Adding a node to a binary-installed Kubernetes 1.17.4 cluster
1: Prerequisites
Start from a binary-installed k8s cluster that already has several nodes, with all certificate files intact. My version is 1.17.4. The cluster currently has four worker nodes and every component is healthy. The node to add has IP 172.16.2.145 and hostname node5.
2: Files the node needs
- The kubelet, kube-proxy, and flannel binaries. kubelet and kube-proxy must match the cluster version, i.e. 1.17.4; for flannel I use v0.11.0.
- The kubelet, kube-proxy, and flannel configuration files and systemd unit files, plus the kubelet and kube-proxy kubeconfig (authentication) files.
- The kube-proxy certificates, the k8s certificates, and the etcd certificates (etcd and k8s use two separate certificate sets; do not mix them up).
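The scp commands in part 4 assume node5 already has the same directory layout as the existing nodes, so it is worth creating that tree first. A minimal sketch; the paths mirror the layout used throughout this post (adjust if yours differs):

```shell
# Create the directory tree on node5 that the copy steps below expect.
# The logs subdirectories are the ones referenced by --log-dir in the
# kubelet and kube-proxy configs.
for d in /data/flannel/bin /data/flannel/cfg \
         /data/kubernetes/bin /data/kubernetes/cfg /data/kubernetes/ssl \
         /data/kubernetes/logs/kubelet /data/kubernetes/logs/kube-proxy \
         /data/etcd/ssl; do
  mkdir -p "$d"
done
```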
3: Initialize the node
Disable the firewall, SELinux, and swap:
[root@node5 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@node5 ~]# setenforce 0
setenforce: SELinux is disabled
[root@node5 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@node5 ~]# cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
EOF
[root@node5 ~]# sysctl -p > /dev/null
[root@node5 ~]# swapoff -a
[root@node5 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@node5 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3790         516        1876          96        1396        2816
Swap:             0           0           0
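kubelet refuses to start while swap is enabled (unless `--fail-swap-on=false` is set, as in the config later in this post), so it pays to verify the result before continuing. A small sketch; `check_swap_off` is an illustrative helper, not part of the original setup:

```shell
# Succeed only if the given /proc/meminfo-style file reports zero swap.
check_swap_off() {
  swap_kb=$(awk '/^SwapTotal:/ {print $2}' "$1")
  [ "$swap_kb" -eq 0 ]
}

if check_swap_off /proc/meminfo; then
  echo "swap is off"
else
  echo "swap still enabled; re-run swapoff -a"
fi
```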
Install Docker:
[root@node5 ~]# yum install docker-ce -y
[root@node5 ~]# docker -v
Docker version 19.03.8, build afacb8b
4: Prepare the required files
Copy the binaries from node 1 to node5. To make the paths clear, I print the current working directory before each scp:
[root@k8s-bin-node01 bin]# pwd
/data/flannel/bin
[root@k8s-bin-node01 bin]# scp flanneld mk-docker-opts.sh root@172.16.2.145:/data/flannel/bin/
[root@k8s-bin-node01 bin]# pwd
/data/kubernetes/bin
[root@k8s-bin-node01 bin]# scp kubelet kube-proxy root@172.16.2.145:/data/kubernetes/bin/
Copy the kubelet, kube-proxy, and flannel configuration files, the systemd unit files, and the kubelet and kube-proxy kubeconfig files from node 1:
[root@k8s-bin-node01 cfg]# pwd
/data/flannel/cfg
[root@k8s-bin-node01 cfg]# scp flanneld root@172.16.2.145:/data/flannel/cfg/
[root@k8s-bin-node01 cfg]# pwd
/data/kubernetes/cfg
[root@k8s-bin-node01 cfg]# scp kubelet kube-proxy root@172.16.2.145:/data/kubernetes/cfg/
[root@k8s-bin-node01 system]# pwd
/usr/lib/systemd/system
[root@k8s-bin-node01 system]# scp kubelet.service kube-proxy.service flanneld.service root@172.16.2.145:/usr/lib/systemd/system
[root@k8s-bin-node01 cfg]# pwd
/data/kubernetes/cfg
[root@k8s-bin-node01 cfg]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.2.145:/data/kubernetes/cfg/
Copy the kube-proxy, k8s, and etcd certificates from node 1 to node5:
[root@k8s-bin-node01 ssl]# pwd
/data/etcd/ssl
[root@k8s-bin-node01 ssl]# scp ca-key.pem ca.pem server-key.pem server.pem root@172.16.2.145:/data/etcd/ssl/
[root@k8s-bin-node01 ssl]# pwd
/data/kubernetes/ssl
[root@k8s-bin-node01 ssl]# scp ca.pem ca-key.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem root@172.16.2.145:/data/kubernetes/ssl/
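The scp invocations above can also be collected into one list-driven script, which is handy when adding more than one node. A sketch; it only prints the commands by default (set `DRY_RUN=0` to actually copy), and the file list is exactly the one copied above:

```shell
# Push all files a new worker needs in one pass.
DEST=root@172.16.2.145
DRY_RUN=${DRY_RUN:-1}   # 1 = just print the scp commands

run() { [ "$DRY_RUN" = 1 ] && echo "scp $*" || scp "$@"; }

run /data/flannel/bin/flanneld /data/flannel/bin/mk-docker-opts.sh "$DEST":/data/flannel/bin/
run /data/kubernetes/bin/kubelet /data/kubernetes/bin/kube-proxy "$DEST":/data/kubernetes/bin/
run /data/flannel/cfg/flanneld "$DEST":/data/flannel/cfg/
run /data/kubernetes/cfg/kubelet /data/kubernetes/cfg/kube-proxy \
    /data/kubernetes/cfg/bootstrap.kubeconfig /data/kubernetes/cfg/kube-proxy.kubeconfig \
    "$DEST":/data/kubernetes/cfg/
run /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kube-proxy.service \
    /usr/lib/systemd/system/flanneld.service "$DEST":/usr/lib/systemd/system/
run /data/etcd/ssl/ca.pem /data/etcd/ssl/ca-key.pem /data/etcd/ssl/server.pem \
    /data/etcd/ssl/server-key.pem "$DEST":/data/etcd/ssl/
run /data/kubernetes/ssl/ca.pem /data/kubernetes/ssl/ca-key.pem /data/kubernetes/ssl/server.pem \
    /data/kubernetes/ssl/server-key.pem /data/kubernetes/ssl/kube-proxy.pem \
    /data/kubernetes/ssl/kube-proxy-key.pem "$DEST":/data/kubernetes/ssl/
```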
At this point all the files node5 needs are in place.
5: Start the node5 node
Edit the kubelet and kube-proxy configuration files, changing the IPs to this machine's, as follows:
[root@node5 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/data/kubernetes/logs/kubelet \
--address=172.16.2.145 \
--hostname-override=172.16.2.145 \
--kubeconfig=/data/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/data/kubernetes/ssl \
--cluster-dns=10.0.0.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node5 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=false \
--log-dir=/data/kubernetes/logs/kube-proxy \
--v=4 \
--hostname-override=172.16.2.145 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig"
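Since these files were cloned from node 1, every occurrence of the old node's IP has to become 172.16.2.145, and a sed one-liner is less error-prone than hand-editing. A sketch, demonstrated on a scratch copy so it is safe to run anywhere; 172.16.2.107 stands in for the source node's IP (taken from the node list later in this post):

```shell
OLD_IP=172.16.2.107   # IP of the node the files were copied from (example)
NEW_IP=172.16.2.145   # this node

# Demo on a scratch file; on node5 the real targets are
# /data/kubernetes/cfg/kubelet and /data/kubernetes/cfg/kube-proxy.
mkdir -p /tmp/cfg-demo
printf -- '--hostname-override=%s \\\n' "$OLD_IP" > /tmp/cfg-demo/kubelet
sed -i "s/$OLD_IP/$NEW_IP/g" /tmp/cfg-demo/kubelet
grep -- "--hostname-override=$NEW_IP" /tmp/cfg-demo/kubelet
```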
Start docker, flanneld, kubelet, and kube-proxy. Once they are up, check that kubelet is running normally before moving on:
[root@node5 cfg]# systemctl start docker;systemctl enable docker
[root@node5 cfg]# systemctl start flanneld;systemctl enable flanneld
[root@node5 cfg]# systemctl start kubelet;systemctl enable kubelet
[root@node5 cfg]# systemctl start kube-proxy;systemctl enable kube-proxy
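A short loop confirms all four services came up; `report_state` is an illustrative helper that simply turns a command's exit status into a readable line:

```shell
# Print "NAME: active" if the command succeeds, "NAME: down" otherwise.
report_state() {
  name=$1; shift
  if "$@" >/dev/null 2>&1; then echo "$name: active"; else echo "$name: down"; fi
}

for svc in docker flanneld kubelet kube-proxy; do
  report_state "$svc" systemctl is-active --quiet "$svc"
done
```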
On the master, list the CSRs and approve this node's join request, then wait a moment and check the node list:
[root@k8s-bin-master01 config]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-TLFAeDwUhcScypkpi-wokn7HpIJFr3Ex17IJSjnClik   3m    kubelet-bootstrap   Pending
[root@k8s-bin-master01 config]# kubectl certificate approve node-csr-TLFAeDwUhcScypkpi-wokn7HpIJFr3Ex17IJSjnClik
certificatesigningrequest.certificates.k8s.io/node-csr-TLFAeDwUhcScypkpi-wokn7HpIJFr3Ex17IJSjnClik approved
[root@k8s-bin-master01 config]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
172.16.2.107   Ready    <none>   33d   v1.17.4
172.16.2.108   Ready    <none>   33d   v1.17.4
172.16.2.109   Ready    <none>   33d   v1.17.4
172.16.2.110   Ready    <none>   33d   v1.17.4
172.16.2.145   Ready    <none>   1s    v1.17.4
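When several nodes bootstrap at once, each pending CSR needs its own approve; piping the names through xargs handles them in one go. A sketch; the helper only echoes the approve commands so the pipeline shape is visible, and the CSR names are made up. On a real master you would run `kubectl get csr -o name | xargs -r kubectl certificate approve`:

```shell
# Turn a list of CSR resource names (one per line on stdin) into the
# corresponding approve commands. Drop the `echo` to actually approve.
approve_cmds() {
  xargs -r -n1 echo kubectl certificate approve
}

printf 'csr/node-csr-aaa\ncsr/node-csr-bbb\n' | approve_cmds
```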
6: Problems you may run into
kube-proxy logs an error like this:
Failed to delete stale service IP 10.0.0.2 connections, error: error deleting connection tracking stat...found in $PATH
Hint: Some lines were ellipsized, use -l to show in full
The fix:
[root@node5 cfg]# yum -y install conntrack;systemctl restart kube-proxy
`kubectl get csr` shows no request from the new node:
[root@k8s-bin-master01 ssl]# kubectl get csr
No resources found in default namespace.
The fix: on the master, delete the bootstrap clusterrolebinding and recreate it, then restart kubelet on the node:
[root@k8s-bin-master01 ssl]# kubectl delete clusterrolebinding kubelet-bootstrap
[root@k8s-bin-master01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap