K8S 1.21.3 High-Availability Deployment
Kubernetes installation packages:
https://github.com/gghuogg/k8s-Installation-package
or
https://gitee.com/gaohaixiang192/k8s
OS version
[root@k8s-master1 k8s]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Docker version
[root@k8s-master1 k8s]# docker -v
Docker version 18.06.3-ce, build d7080c1
Kubernetes version
[root@k8s-master1 k8s]# kubelet --version
Kubernetes v1.21.3
[root@k8s-master1 k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
(The "connection refused" line is expected at this point: no cluster exists yet and no kubeconfig has been configured.)
[root@k8s-master1 k8s]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:03:28Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
IP plan
k8s-master1 : 192.168.73.130
k8s-master2 : 192.168.73.131
k8s-master3 : 192.168.73.132
k8s-node1 : 192.168.73.133
k8s-node2 : 192.168.73.134
IP configuration
# k8s-master1
nmcli connection modify ens33 ipv4.method manual ipv4.address 192.168.73.130/24 ipv4.gateway 192.168.73.2 connection.autoconnect yes
nmcli connection up ens33
# k8s-master2
nmcli connection modify ens33 ipv4.method manual ipv4.address 192.168.73.131/24 ipv4.gateway 192.168.73.2 connection.autoconnect yes
nmcli connection up ens33
# k8s-master3
nmcli connection modify ens33 ipv4.method manual ipv4.address 192.168.73.132/24 ipv4.gateway 192.168.73.2 connection.autoconnect yes
nmcli connection up ens33
# k8s-node1
nmcli connection modify ens33 ipv4.method manual ipv4.address 192.168.73.133/24 ipv4.gateway 192.168.73.2 connection.autoconnect yes
nmcli connection up ens33
# k8s-node2
nmcli connection modify ens33 ipv4.method manual ipv4.address 192.168.73.134/24 ipv4.gateway 192.168.73.2 connection.autoconnect yes
nmcli connection up ens33
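To confirm the address is active on each host (ens33 assumed as the interface name, as above):
ip -4 addr show ens33
nmcli connection show ens33 | grep ipv4.addresses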
1. Disable firewalld and SELinux (all hosts)
vi /etc/selinux/config    # set SELINUX=disabled
getenforce
setenforce 0
getenforce
systemctl stop firewalld
systemctl disable firewalld
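Editing /etc/selinux/config by hand works; as a non-interactive alternative, a sketch assuming the file still has the stock SELINUX=enforcing line:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config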
2. Configure hostnames and /etc/hosts name resolution (all hosts)
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-master3
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
vim /etc/hosts
192.168.73.130 k8s-master1
192.168.73.131 k8s-master2
192.168.73.132 k8s-master3
192.168.73.133 k8s-node1
192.168.73.134 k8s-node2
Copy the file to the other hosts:
for i in {131..134};do scp /etc/hosts 192.168.73.$i:/etc/hosts ;done
3. Add the kernel parameter file /etc/sysctl.d/k8s.conf (all hosts)
vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
4. Run the following commands (all hosts)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
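modprobe only loads br_netfilter for the current boot; to have it reloaded automatically after a reboot, a sketch using the standard systemd modules-load mechanism:
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF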
mkdir -p /data/k8s ;cd /data/k8s ;yum -y install git
git init
git pull https://gitee.com/gaohaixiang192/k8s.git
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
yum -y remove libselinux-python libselinux-utils
yum -y install libsepol-2.5-10.el7.x86_64.rpm
yum -y install libselinux-2.5-15.el7.x86_64.rpm
yum -y install libselinux-python-2.5-15.el7.x86_64.rpm libselinux-utils-2.5-15.el7.x86_64.rpm
yum -y install libsemanage-2.5-14.el7.x86_64.rpm libsemanage-python-2.5-14.el7.x86_64.rpm
yum -y install setools-libs-3.3.8-4.el7.x86_64.rpm socat-1.7.3.2-2.el7.x86_64.rpm yum-utils-1.1.31-54.el7_8.noarch.rpm
yum -y install policycoreutils-2.5-34.el7.x86_64.rpm policycoreutils-python-2.5-34.el7.x86_64.rpm
yum -y install python-IPy-0.75-6.el7.noarch.rpm selinux-policy-3.13.1-268.el7_9.2.noarch.rpm selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
yum -y install device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64.rpm
yum -y install checkpolicy-2.5-8.el7.x86_64.rpm conntrack-tools-1.4.4-7.el7.x86_64.rpm
yum -y install audit-2.8.5-4.el7.x86_64.rpm audit-libs-2.8.5-4.el7.x86_64.rpm audit-libs-python-2.8.5-4.el7.x86_64.rpm
yum -y install container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
yum -y install docker-ce-18.06.3.ce-3.el7.x86_64.rpm
docker -v
yum -y install 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm
yum -y install db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm 7e38e980f058e3e43f121c2ba73d60156083d09be0acc2e5581372136ce11a1c-kubelet-1.21.3-0.x86_64.rpm
yum -y install b04e5387f5522079ac30ee300657212246b14279e2ca4b58415c7bf1f8c8a8f5-kubectl-1.21.3-0.x86_64.rpm
yum -y install 23f7e018d7380fc0c11f0a12b7fda8ced07b1c04c4ba1c5f5cd24cd4bdfb304d-kubeadm-1.21.3-0.x86_64.rpm
kubelet --version
kubectl version
kubeadm version
systemctl enable docker kubelet
systemctl restart docker
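Optional: kubeadm later warns that Docker uses the cgroupfs driver (see the init log in step 15). The warning is harmless in this walkthrough, but if you prefer the recommended systemd driver, a sketch follows; note that the kubelet cgroup driver must match, and this guide keeps the default:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info 2>/dev/null | grep -i cgroup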
5. Pull the images, retag them, and remove the unneeded source tags
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64 quay.io/coreos/flannel:v0.13.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64
docker rmi registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0
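The pull/tag/rmi sequence above can also be scripted; a sketch using the same registry prefix and tags:
REG=registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s
for img in kube-apiserver:v1.21.3 kube-scheduler:v1.21.3 kube-proxy:v1.21.3 \
           kube-controller-manager:v1.21.3 pause:3.4.1 etcd:3.4.13-0; do
  docker pull $REG/$img
  docker tag  $REG/$img k8s.gcr.io/$img
  docker rmi  $REG/$img
done
# coredns and flannel are retagged to different names, so handle them separately
docker pull $REG/coredns:v1.8.0 && docker tag $REG/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0 && docker rmi $REG/coredns:v1.8.0
docker pull $REG/flannel:v0.13.0-amd64 && docker tag $REG/flannel:v0.13.0-amd64 quay.io/coreos/flannel:v0.13.0-amd64 && docker rmi $REG/flannel:v0.13.0-amd64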
6. Save the images and copy them to the other hosts
docker save k8s.gcr.io/kube-apiserver:v1.21.3 > kube-apiserver.tar
docker save k8s.gcr.io/kube-scheduler:v1.21.3 > kube-scheduler.tar
docker save k8s.gcr.io/kube-controller-manager:v1.21.3 > kube-controller-manager.tar
docker save k8s.gcr.io/kube-proxy:v1.21.3 > kube-proxy.tar
docker save k8s.gcr.io/coredns/coredns:v1.8.0 > coredns.tar
docker save k8s.gcr.io/pause:3.4.1 > pause.tar
docker save quay.io/coreos/flannel:v0.13.0-amd64 > flannel.tar
docker save k8s.gcr.io/etcd:3.4.13-0 > etcd.tar
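A sketch for copying the tar files to the other hosts (assuming they were saved under /data/k8s, the working directory used above):
for i in {131..134}; do scp /data/k8s/*.tar 192.168.73.$i:/data/k8s/ ; done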
7. Load the saved images on each host
docker load < kube-apiserver.tar
docker load < kube-scheduler.tar
docker load < kube-controller-manager.tar
docker load < kube-proxy.tar
docker load < coredns.tar
docker load < pause.tar
docker load < flannel.tar
docker load < etcd.tar
8. Disable swap (all hosts)
swapoff -a
sysctl -p /etc/sysctl.d/k8s.conf
Comment out the swap entry in /etc/fstab (see the sed sketch at the end of this step)
mount -a
echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet
9. Enable services at boot
systemctl enable kubelet.service docker
systemctl restart docker
systemctl restart kubelet
10. HAProxy deployment
On the k8s-master1 node:
yum install haproxy -y
[root@k8s-master1 yum.repos.d]# cat /etc/haproxy/haproxy.cfg
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
#defaults
listen stats 0.0.0.0:12345
    mode http
    log global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    #stats auth admin:p@sssw0rd
    stats uri /stats
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s
#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind 0.0.0.0:12567
    mode tcp
    option tcplog
    default_backend kube-api-server
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend kube-api-server
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-master1 192.168.73.130:6443 check
    server k8s-master2 192.168.73.131:6443 check
    server k8s-master3 192.168.73.132:6443 check
[root@k8s-master1 yum.repos.d]# systemctl enable haproxy --now
[root@k8s-master1 yum.repos.d]# systemctl restart haproxy
11. Check cluster health on the HAProxy stats page
http://192.168.73.130:12345/stats
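To verify HAProxy itself, check the configuration syntax and the stats endpoint; the three apiserver backends will stay DOWN until the control plane is initialized in step 15:
haproxy -c -f /etc/haproxy/haproxy.cfg
curl -s http://192.168.73.130:12345/stats | head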
12. Install the etcd cluster
etcd configuration file: /data/etcd/etcd.conf (one per node; contents below)
# k8s-master1 (etcd01)
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data/etcddata"
ETCD_LISTEN_PEER_URLS="https://192.168.73.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.130:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.130:2380,etcd02=https://192.168.73.131:2380,etcd03=https://192.168.73.132:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# k8s-master2 (etcd02)
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/data/etcddata"
ETCD_LISTEN_PEER_URLS="https://192.168.73.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.131:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.130:2380,etcd02=https://192.168.73.131:2380,etcd03=https://192.168.73.132:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# k8s-master3 (etcd03)
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/data/etcddata"
ETCD_LISTEN_PEER_URLS="https://192.168.73.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.132:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.132:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.130:2380,etcd02=https://192.168.73.131:2380,etcd03=https://192.168.73.132:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
Systemd unit file: /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data/etcd/
EnvironmentFile=/data/etcd/etcd.conf
ExecStart=/data/etcd/etcd \
--initial-cluster-state=new \
--cert-file=/data/etcd/ssl/server.pem \
--key-file=/data/etcd/ssl/server-key.pem \
--peer-cert-file=/data/etcd/ssl/server.pem \
--peer-key-file=/data/etcd/ssl/server-key.pem \
--trusted-ca-file=/data/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/data/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
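The unit file expects the etcd and etcdctl binaries under /data/etcd/. If they are not already in place (for example from the installation package linked above), a sketch for fetching them, assuming the upstream v3.4.13 release tarball:
cd /data/etcd
curl -LO https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
tar xf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcd etcd-v3.4.13-linux-amd64/etcdctl /data/etcd/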
13. Create the TLS certificates
mkdir -p /data/etcd/ssl && cd $_
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/bin/cfssljson
chmod +x /usr/bin/cfssl*
Full contents of tls.sh are below (change the IP addresses first; include as many hosts in "hosts" as you might ever need):
# etcd
# cat ca-config.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# cat ca-csr.json
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
# cat server-csr.json
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.73.130",
    "192.168.73.131",
    "192.168.73.132",
    "192.168.73.158",
    "192.168.73.157",
    "192.168.73.156"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Run the following commands:
sh tls.sh
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls *.pem
Then copy the four generated .pem certificate files to /data/etcd/ssl on every etcd node (see the scp sketch below).
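A sketch for distributing the certificates to the other two etcd nodes (IPs from the plan above):
for i in 131 132; do ssh 192.168.73.$i "mkdir -p /data/etcd/ssl"; scp /data/etcd/ssl/*.pem 192.168.73.$i:/data/etcd/ssl/ ; done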
Start the etcd service:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
14. Check the cluster status
Cluster status is checked mainly with the etcdctl endpoint status and etcdctl endpoint health commands:
cd /data/etcd/ && ./etcdctl \
--endpoints="https://192.168.73.130:2379,https://192.168.73.131:2379,https://192.168.73.132:2379" \
--cacert=ssl/ca.pem \
--key=ssl/server-key.pem \
--cert=ssl/server.pem \
endpoint health
cd /data/etcd/ && ./etcdctl \
--endpoints="https://192.168.73.130:2379,https://192.168.73.131:2379,https://192.168.73.132:2379" \
--cacert=ssl/ca.pem \
--key=ssl/server-key.pem \
--cert=ssl/server.pem \
endpoint status
15. Kubernetes cluster initialization
/etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.73.130   # IP of this host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                  # hostname of this host
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.73.130:12567"   # VIP and HAProxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://192.168.73.130:2379
    - https://192.168.73.131:2379
    - https://192.168.73.132:2379
    caFile: /data/etcd/ssl/ca.pem
    certFile: /data/etcd/ssl/server.pem
    keyFile: /data/etcd/ssl/server-key.pem
imageRepository: k8s.gcr.io          # image registry; adjust to your environment
kind: ClusterConfiguration
kubernetesVersion: v1.21.3           # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy fails with this feature gate enabled; it stays commented out
# (other effects of removing it were not re-tested)
#featureGates:
#  SupportIPVSProxyMode: true
mode: ipvs
Initialize the cluster.
Initialization log:
[root@k8s-master1 kubernetes]# kubeadm init --config=kubeadm-config.yaml --upload-certs --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.73.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.565006 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4c0d22bbb9c2186535fc5bb32604234b1e6a6ee84658b6f130950589e3ebbdd3
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130 \
--control-plane --certificate-key 4c0d22bbb9c2186535fc5bb32604234b1e6a6ee84658b6f130950589e3ebbdd3
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130
[root@k8s-master1 kubernetes]#
Control-plane node join command:
kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130 \
--control-plane --certificate-key 4c0d22bbb9c2186535fc5bb32604234b1e6a6ee84658b6f130950589e3ebbdd3
Worker node join command:
kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130
16. Join the remaining control-plane nodes
k8s-master2 joins the cluster:
[root@k8s-master2 kubernetes]# kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130 \
> --control-plane --certificate-key 4c0d22bbb9c2186535fc5bb32604234b1e6a6ee84658b6f130950589e3ebbdd3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.73.131 192.168.73.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
k8s-master3 joins the cluster:
[root@k8s-master3 kubernetes]# kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130 \
> --control-plane --certificate-key 4c0d22bbb9c2186535fc5bb32604234b1e6a6ee84658b6f130950589e3ebbdd3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.73.132 192.168.73.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
17. Join the worker nodes
k8s-node1 joins the cluster:
[root@k8s-node1 kubernetes]# kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
k8s-node2 joins the cluster:
[root@k8s-node2 kubernetes]# kubeadm join 192.168.73.130:12567 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:10a95824410711da6a83a0f13a5a3f16df18f036fc540f531166dc437da0d130
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
18. If the uploaded certificates or token have expired, regenerate them and join again
Re-upload the certificates:
[root@k8s-master1 kubernetes]# kubeadm init phase upload-certs --upload-certs
W0803 17:38:07.670046 44778 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": dial tcp: lookup dl.k8s.io on [::1]:53: read udp [::1]:56684->[::1]:53: read: connection refused
W0803 17:38:07.670925 44778 version.go:103] falling back to the local client version: v1.21.3
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
63e54ed5b06bc1f7b93adaeda0df792b12064c11f9d274d9f3d2b5b012cbc584
Generate a new bootstrap token:
[root@k8s-master1 kubernetes]# kubeadm token generate
9psojs.mqrgtud16qjymfok
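With a fresh token, kubeadm can also print a complete worker join command directly; for a control-plane join, append the control-plane flags with the new certificate key (a sketch):
kubeadm token create --print-join-command
# for a control-plane node, append to the printed command:
#   --control-plane --certificate-key <certificate key printed by upload-certs>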
19. Deploy flannel (pod network)
https://gitee.com/gaohaixiang192/k8s/blob/main/kube-flannel.yml
Make sure the image referenced in the manifest matches the image loaded locally; otherwise it will be pulled from the registry.
kubectl create -f kube-flannel.yml
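Once the flannel pods are running, the nodes should move to Ready; a quick check:
kubectl get pods -n kube-system -o wide
kubectl get nodes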
20. Other errors encountered during installation
kube-proxy error: the fix is already applied in kubeadm-config.yaml above; if the error below still appears, make the same change in the running cluster:
server.go:482] failed complete: unrecognized feature gate: SupportIPVSProxyMode
Edit the kube-proxy ConfigMap:
kubectl edit cm kube-proxy -n kube-system
Comment out the following two lines:
#featureGates:
# SupportIPVSProxyMode: true
Delete the kube-proxy pods and wait for them to be recreated (a sketch follows):
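A sketch for restarting kube-proxy after editing the ConfigMap (the DaemonSet pods carry the k8s-app=kube-proxy label):
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy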
Pod creation failure:
network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.3.1/24
This happens when the cni0 bridge is on a different subnet from the flannel network.
Delete the cni0 bridge and let it be recreated automatically:
ifconfig cni0 down
ip link delete cni0
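After removing cni0, restarting the flannel pod on the affected node lets the bridge be recreated with the correct subnet (a sketch, assuming the standard manifest's app=flannel label):
kubectl -n kube-system delete pod -l app=flannel --field-selector spec.nodeName=<node-name>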
21. Testing
A quick smoke test can follow these guides:
https://blog.csdn.net/liao__ran/article/details/102647786
https://blog.csdn.net/liao__ran/article/details/106009227