After Deploying a k8s Cluster with kubeadm, scheduler and controller-manager Show Unhealthy
Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
Contents
1. Symptoms
2. Solution
  2.1 Edit kube-controller-manager.yaml and comment out --port=0
  2.2 Edit kube-scheduler.yaml and comment out --port=0
  2.3 Restart kubelet on the master node
1. Symptoms
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
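The cause is visible in the manifests below: the kubeadm-generated static pod manifests pass --port=0 to kube-controller-manager and kube-scheduler, which disables the insecure HTTP endpoints on 127.0.0.1:10252 and 127.0.0.1:10251 that kubectl get cs still probes, hence the "connection refused". You can confirm that nothing is listening on those ports (assuming ss and curl are available on the node):

[root@k8s-master01 ~]# ss -tlnp | grep -E '10251|10252'      # no output: nothing is listening
[root@k8s-master01 ~]# curl http://127.0.0.1:10252/healthz   # fails with "connection refused"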
2. Solution
2.1 Edit kube-controller-manager.yaml and comment out --port=0
[root@k8s-master01 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
....
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=172.168.0.0/12
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    #- --port=0
....
2.2 Edit kube-scheduler.yaml and comment out --port=0
[root@k8s-master01 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
....
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    #- --port=0
....
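If you prefer not to open an editor, a sed loop can comment out the flag in both manifests. This is a sketch assuming the default kubeadm manifest directory /etc/kubernetes/manifests and the exact "- --port=0" spelling shown above:

[root@k8s-master01 ~]# for f in kube-controller-manager kube-scheduler; do
>   sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/$f.yaml
> done

Note that kubelet watches this directory and re-creates the static pods on its own within a few seconds of a manifest change; the restart in the next step simply forces an immediate reload. On a cluster with multiple control-plane nodes, repeat the edit on every one, since static pod manifests are per-node.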
2.3 Restart kubelet on the master node
[root@k8s-master01 ~]# systemctl restart kubelet.service
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
Everything is back to normal!
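As a final check, the health endpoints themselves should now answer, because removing --port=0 re-enables the insecure HTTP ports (10251 for the scheduler, 10252 for the controller-manager):

[root@k8s-master01 ~]# curl http://127.0.0.1:10251/healthz   # should print: ok
[root@k8s-master01 ~]# curl http://127.0.0.1:10252/healthz   # should print: ok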