Steps for Deleting and Adding Nodes in a K8s Cluster, and Troubleshooting
Steps for deleting a node from a K8s cluster and adding one back. Also covers how to fix nodes that cannot rejoin the cluster after the master node breaks and is redeployed.
Deleting a node
1. First, on the master node, look up the node's name and status; the Pods still running on it can be listed as shown below:
kubectl get nodes -o wide
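To see which Pods are still running on that node before draining it (nodeXXX stands for the node's name, matching the placeholder used in the commands below), a field selector can be used:
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=nodeXXX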
2. On the master node, drain (evict) the Pods from the node:
kubectl drain nodeXXX --delete-local-data --force --ignore-daemonsets
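Note: on newer kubectl versions (roughly v1.20 and later) the --delete-local-data flag was renamed, so the equivalent command there is:
kubectl drain nodeXXX --delete-emptydir-data --force --ignore-daemonsets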
3. On the master node, delete the node:
kubectl delete node nodeXXX
4. On the node itself, run the following commands:
kubeadm reset # reset the node's kubeadm state
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
systemctl start kubelet
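As a quick sanity check after the cleanup above, confirm that the CNI interfaces are really gone (the command should print nothing):
ip link show | grep -E 'cni0|flannel'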
Adding a node back to the cluster
On the node, delete the old certificates (left over from the previous cluster; skip this step for a brand-new cluster):
rm -f /opt/kubernetes/ssl/kubelet*
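The /opt/kubernetes/ssl path above assumes a binary (non-kubeadm) installation; on a kubeadm-installed node the kubelet certificates usually live under /var/lib/kubelet/pki/ instead, for example:
rm -f /var/lib/kubelet/pki/kubelet*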
Restart kubelet:
systemctl restart kubelet
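You can verify that kubelet came back up before rejoining:
systemctl status kubelet
journalctl -u kubelet -n 50 --no-pager # recent kubelet logs, useful if the service is not healthy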
On the master node, generate a token and the corresponding join command:
kubeadm token create --print-join-command
kubeadm token create --print-join-command --ttl=0 # this generates a token that never expires
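Existing tokens and their expiry times can be listed on the master with:
kubeadm token list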
On the node, run the join command printed by the command above. For example:
kubeadm join 192.168.2.104:6443 --token f5pth3.fm2oupz828pl70s4 --discovery-token-ca-cert-hash sha256:5fd5bfbef1ed2d48b872cd95beb99a7a470582d28cc499ba577b8bbd9c2e7735
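If only the --discovery-token-ca-cert-hash value is needed, it can be recomputed on the master from the cluster CA certificate (the standard kubeadm method):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'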
Afterwards, check on the master node whether the node was added successfully:
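For example:
kubectl get nodes -o wide # the new node should appear and eventually report STATUS Ready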
Problem description
The issue encountered and its solution:
The K8s master node broke; after the master was redeployed, the node could not join the cluster.
The error output was:
[root@k8s-node2 ~]# kubeadm join apiserver.demo:6443 --token ou7vjm.oceacziy0m2z69ak --discovery-token-ca-cert-hash sha256:3c05e8f1d775a126e78a7643d134e2a1cb378907c160fb8d6ca2d24dc0c30f14
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
Solution:
On the node, clean up the state left over from the old cluster, then rejoin it.
Run the command: kubeadm reset
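A minimal recovery sequence on the node would therefore look like the following (<new-token> and <new-hash> are placeholders for the values printed by kubeadm token create --print-join-command on the new master):
kubeadm reset # removes the old /etc/kubernetes/kubelet.conf and pki/ca.crt and stops kubelet, freeing port 10250
kubeadm join apiserver.demo:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>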