This kind of problem usually only shows up when building a demo cluster on VMware/VirtualBox virtual machines:

In that case the cluster needs to be re-initialized.

1. Master node

kubeadm reset -f:

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[reset] Removing info for node "k8s-master" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
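The leftover state called out in the reset output can be cleaned up by hand. A sketch of those steps (run as root on the node being reset; the `ipvsadm` step only applies if the cluster ran kube-proxy in IPVS mode):

```shell
# remove CNI configuration, which `kubeadm reset` leaves behind
rm -rf /etc/cni/net.d

# remove the stale kubeconfig so kubectl no longer points at the old cluster
rm -f "$HOME/.kube/config"

# flush iptables rules created by kube-proxy (guarded: ignore if not root)
iptables -F || true
iptables -t nat -F || true
iptables -X || true

# clear IPVS tables, but only if ipvsadm is installed (IPVS-mode clusters)
command -v ipvsadm >/dev/null 2>&1 && ipvsadm --clear || true
```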

kubeadm init \
--apiserver-advertise-address=192.168.18.2 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=172.31.0.0/16
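Before running init, a quick sanity check that none of the host's IPv4 addresses fall inside either CIDR can save a debugging session. This sketch only matches the two /16 prefixes used in this demo; adjust the pattern if you pick different CIDRs:

```shell
# none of the host's addresses may sit inside service-cidr (10.96.0.0/16)
# or pod-network-cidr (172.31.0.0/16); grep matches those /16 prefixes
if ip -4 addr show 2>/dev/null | grep -qE 'inet (10\.96|172\.31)\.'; then
  overlap="yes"
else
  overlap="no"
fi
echo "host overlaps cluster CIDRs: $overlap"
```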

Several points need particular attention here:

1). apiserver-advertise-address => the master node's address

2). cluster-endpoint must be mapped in the /etc/hosts file

3). service-cidr and pod-network-cidr must not overlap with the host machine's network segment (check with ip addr)
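For point 2), every node (the master and each worker) needs a hosts entry mapping cluster-endpoint to the master's address. For this demo's master that entry would be:

```
# /etc/hosts
192.168.18.2  cluster-endpoint
```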

Output like the following means the master node initialized successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token bnjjg9.swd63b8h0to6e39d \
    --discovery-token-ca-cert-hash sha256:4d0939929205ba9abc2cc985e4530f875bd546e3a289f8ff1a2f9b31cf00723a \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token bnjjg9.swd63b8h0to6e39d \
    --discovery-token-ca-cert-hash sha256:4d0939929205ba9abc2cc985e4530f875bd546e3a289f8ff1a2f9b31cf00723a 

At this point the file at $HOME/.kube/config still needs to be cleaned up manually:

sudo rm -rf $HOME/.kube/config

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then run kubectl get nodes again; the master node now shows up as normal.

2. Worker nodes

1). Likewise run kubeadm reset -f; the output matches the master node's.

2). Edit the hostname mappings in the node's /etc/hosts to match the master node's, so that cluster-endpoint resolves correctly when kubeadm join runs.
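A quick way to confirm the mapping before joining (getent consults /etc/hosts through NSS, so it sees exactly what kubeadm will see):

```shell
# check that cluster-endpoint resolves on this node; if the hosts entry
# is missing, `kubeadm join` cannot reach the API server
if getent hosts cluster-endpoint >/dev/null; then
  status="resolves"
else
  status="missing"
fi
echo "cluster-endpoint: $status"
```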

3). Run kubeadm join:

kubeadm join cluster-endpoint:6443 --token bnjjg9.swd63b8h0to6e39d \
    --discovery-token-ca-cert-hash sha256:4d0939929205ba9abc2cc985e4530f875bd546e3a289f8ff1a2f9b31cf00723a 

4). Copy the master node's $HOME/.kube/config file to /root/.kube/config on the worker node.

Run kubectl get nodes again: the whole cluster is now initialized.

Many thanks to this blogger's post: https://www.jianshu.com/p/2027b4aa2997

5). The CNI container network plugin needs to be reinstalled:

kubectl apply -f calico.yaml
