Deleting and Re-adding a Node in a Kubernetes Cluster
A node was accidentally deleted from an existing Kubernetes cluster and needs to be added back. Note that if the replacement is a brand-new server, Docker and the basic Kubernetes components (kubelet, kubeadm) must be installed on it first.
1. List the nodes and delete the node (on the master node)
[root@k8s01 ~]# kubectl get nodes
  NAME    STATUS   ROLES    AGE   VERSION
  k8s01   Ready    master   40d   v1.15.3
  k8s02   Ready    <none>   40d   v1.15.3
  k8s03   Ready    <none>   40d   v1.15.3
[root@k8s01 ~]# kubectl delete nodes k8s03
  node "k8s03" deleted
[root@k8s01 ~]# kubectl get nodes
  NAME    STATUS   ROLES    AGE   VERSION
  k8s01   Ready    master   40d   v1.15.3
  k8s02   Ready    <none>   40d   v1.15.3
[root@k8s01 ~]#
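Note that `kubectl delete node` removes the node object immediately without evicting its Pods. When a removal is intentional and the node is still reachable, draining it first is the safer sequence. A sketch (`--delete-local-data` is the v1.15-era flag; later releases renamed it `--delete-emptydir-data`):

```shell
# Cordon the node and gracefully evict its Pods before removal
# (sketch: assumes the node is still reachable from the master)
kubectl drain k8s03 --ignore-daemonsets --delete-local-data

# Only once the drain completes, remove the node object from the cluster
kubectl delete node k8s03
```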
2. Wipe the old cluster state on the deleted node
[root@k8s03 ~]# kubeadm reset
  [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
  [reset] Are you sure you want to proceed? [y/N]: y
  [preflight] Running pre-flight checks
  W1017 15:43:41.491522    3010 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
  [reset] No etcd config found. Assuming external etcd
  [reset] Please, manually reset etcd to prevent further issues
  [reset] Stopping the kubelet service
  [reset] Unmounting mounted directories in "/var/lib/kubelet"
  [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
  [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
  [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
  The reset process does not reset or clean up iptables rules or IPVS tables.
  If you wish to reset iptables, you must do so manually.
  For example:
  iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
  If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
  to reset your system's IPVS tables.
  The reset process does not clean your kubeconfig files and you must remove them manually.
  Please, check the contents of the $HOME/.kube/config file.
[root@k8s03 ~]#
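As the reset output itself warns, `kubeadm reset` leaves iptables/IPVS rules and kubeconfig files behind. On the node they can be cleaned up with the commands the output suggests:

```shell
# Flush iptables rules left over from kube-proxy (as the reset output suggests)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# If kube-proxy ran in IPVS mode, clear the IPVS tables as well
ipvsadm --clear

# Remove the stale kubeconfig, if one was copied to this node
rm -f $HOME/.kube/config
```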
3. Generate a join command with a new token on the master node
[root@k8s01 ~]# kubeadm token create --print-join-command
  kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
[root@k8s01 ~]#
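Tokens created this way are valid for 24 hours by default, so a delayed join can fail on an expired token. A sketch for checking and controlling token lifetime (the 2h TTL is just an example value):

```shell
# List existing bootstrap tokens and their remaining TTLs
kubeadm token list

# Create a token with an explicit TTL (example value) and print the join command
kubeadm token create --ttl 2h --print-join-command
```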
4. Rejoin the node to the Kubernetes cluster
[root@k8s03 ~]# kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
  [preflight] Running pre-flight checks
   [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
   [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
  [preflight] Reading configuration from the cluster...
  [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Activating the kubelet service
  [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
  * Certificate signing request was sent to apiserver and a response was received.
  * The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s03 ~]#
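If the join hangs at the "Waiting for the kubelet to perform the TLS Bootstrap" step, the kubelet on the node is the first thing to check. A sketch of the usual commands:

```shell
# Confirm the kubelet service came up on the rejoined node
systemctl status kubelet --no-pager

# Inspect recent kubelet logs for bootstrap/registration errors
journalctl -u kubelet --since "10 min ago" --no-pager
```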
5. Verify the state of the whole cluster
[root@k8s01 ~]# kubectl get nodes
  NAME    STATUS   ROLES    AGE   VERSION
  k8s01   Ready    master   40d   v1.15.3
  k8s02   Ready    <none>   40d   v1.15.3
  k8s03   Ready    <none>   41s   v1.15.3
[root@k8s01 ~]#
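The rejoined worker shows ROLES as <none>, which is normal for kubeadm worker nodes. Purely for display, a role label can be attached (sketch; the label key follows the usual node-role convention):

```shell
# Label k8s03 so "kubectl get nodes" shows a worker role (cosmetic only)
kubectl label node k8s03 node-role.kubernetes.io/worker=
```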