This article was first published on my personal blog: https://blog.smile13.com/articles/2019/01/14/1547445838441.html

Installing and configuring the worker nodes (perform the same steps on every node)

1. Pull the required images

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
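The seven pull/tag pairs above can be collapsed into a loop. A minimal sketch, with the mirror registry and image list taken directly from the commands above (the `target_name` helper is purely illustrative, and the docker calls are guarded so the script is harmless on a host without docker):

```shell
#!/bin/sh
# Mirror registry hosting copies of the k8s.gcr.io images.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

# Images required by kubeadm v1.13.1 (same list as the commands above).
IMAGES="kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
kube-scheduler:v1.13.1 kube-proxy:v1.13.1 \
pause:3.1 etcd:3.2.24 coredns:1.2.6"

# Map a mirrored image name to the canonical k8s.gcr.io name.
target_name() {
    echo "k8s.gcr.io/$1"
}

# Only touch docker when it is actually available on this host.
if command -v docker >/dev/null 2>&1; then
    for img in $IMAGES; do
        docker pull "$MIRROR/$img"
        docker tag "$MIRROR/$img" "$(target_name "$img")"
    done
fi
```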

2. Run the join command to add the node to the cluster

kubeadm join k8s-cluster.smile13.com:6443 --token mk3jfk.tducuowrll39qun8 --discovery-token-ca-cert-hash sha256:a66bb6ff4f065bfc7918c67832f56892071575af0c2039aff20a4fcf25244aaf

###If you have forgotten the token or it has expired, generate a new one:
>1. On the master, run: kubeadm token create
>2. Get the `sha256` hash of the CA certificate: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
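Note that `kubeadm token create --print-join-command`, available in recent kubeadm releases including v1.13, prints a complete join command with a fresh token, combining both steps. The hash pipeline in step 2 can also be sanity-checked against any certificate; a self-contained sketch using a throwaway self-signed cert (the /tmp paths are illustrative stand-ins, not your cluster's actual CA):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert to stand in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null

# Same pipeline as step 2 above: sha256 over the DER-encoded public key.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

# A valid discovery hash is 64 lowercase hex characters.
echo "sha256:$hash"
```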

3. Removing a node

##Run on the master node:
kubectl drain k8s07 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s07

##On k8s07
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

##Run on the other nodes
kubectl delete node k8s07

4. Enable IPVS mode for kube-proxy (run on any one master)

##Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"
[root@k8s01 ~]# kubectl edit cm kube-proxy -n kube-system
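The field being edited lives in the ConfigMap's config.conf data key; the relevant fragment looks roughly like the following (only `mode` needs to change, everything else stays as kubeadm generated it):

```yaml
# kube-system/kube-proxy ConfigMap, data key "config.conf" (fragment)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"      # an empty string or "iptables" selects the iptables proxier
ipvs:
  scheduler: ""   # empty means round-robin ("rr"), as the log below confirms
```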

##Restart all kube-proxy pods (deleting them lets the DaemonSet recreate them)
[root@k8s01 ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-8r9bq" deleted
pod "kube-proxy-9k2zn" deleted
pod "kube-proxy-bv2bf" deleted
pod "kube-proxy-rkwg8" deleted
pod "kube-proxy-sq4lt" deleted
pod "kube-proxy-tvhkx" deleted
pod "kube-proxy-x6v57" deleted

##Check the kube-proxy pod status
[root@k8s01 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-5r7fp                        1/1     Running   0          31s
kube-proxy-895rz                        1/1     Running   0          23s
kube-proxy-ggkrw                        1/1     Running   0          19s
kube-proxy-gszff                        1/1     Running   0          35s
kube-proxy-jl552                        1/1     Running   0          60s
kube-proxy-n72bp                        1/1     Running   0          83s
kube-proxy-pr7f9                        1/1     Running   0          72s

[root@k8s01 ~]# kubectl logs kube-proxy-5r7fp -n kube-system
I0119 15:44:25.451787       1 server_others.go:189] Using ipvs Proxier.
W0119 15:44:25.452270       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I0119 15:44:25.452458       1 server_others.go:216] Tearing down inactive rules.
I0119 15:44:25.503432       1 server.go:464] Version: v1.13.1
I0119 15:44:25.511221       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0119 15:44:25.511419       1 config.go:202] Starting service config controller
I0119 15:44:25.511433       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0119 15:44:25.511457       1 config.go:102] Starting endpoints config controller
I0119 15:44:25.511518       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0119 15:44:25.611687       1 controller_utils.go:1034] Caches are synced for service config controller
I0119 15:44:25.611695       1 controller_utils.go:1034] Caches are synced for endpoints config controller

##The log line "Using ipvs Proxier." confirms that IPVS mode is now enabled.
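This check can be scripted as well. A sketch with a small helper that pulls the proxier name out of that log line (the pod name is just whichever one `kubectl get pod` happened to list, as above):

```shell
#!/bin/sh
# Extract the proxier name from a kube-proxy "Using ... Proxier." log line.
proxier_mode() {
    sed -n 's/.*Using \(.*\) Proxier\..*/\1/p'
}

# On the cluster this needs kubectl access; guard so the sketch is harmless elsewhere.
if command -v kubectl >/dev/null 2>&1; then
    kubectl logs kube-proxy-5r7fp -n kube-system 2>/dev/null | proxier_mode | head -n 1
fi
```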

 

Copyright notice: This is an original article by the author. Please credit the source when reposting!
