Installing a Highly Available Multi-Master Kubernetes Cluster on CentOS 7 with kubeadm
Preface
I previously wrote about one-command kubeadm deployment of Kubernetes clusters (v1.10.3, v1.11.0, v1.13.0) and published k8s-deploy, an open-source project for one-command deployment of a single-Master Kubernetes cluster.
Kubernetes has since moved on to v1.19, so the k8s-deploy project has been updated to support deploying a highly available Kubernetes cluster with multiple Master nodes.
Before starting the installation, I recommend reading through the k8s-deploy project and the common issues in the Troubleshooting section below.
Deployment Plan
Resource Plan
All nodes sit in a single low-latency subnet:
Hostname | IP | Spec | Role |
---|---|---|---|
k8s-lb-01 | 192.168.87.111 | CentOS 7 x86_64, 2 vCPU / 2 GB | HAProxy load balancer |
k8s-master-01 | 192.168.87.121 | CentOS 7 x86_64, 2 vCPU / 2 GB | Master node 1 |
k8s-master-02 | 192.168.87.122 | CentOS 7 x86_64, 2 vCPU / 2 GB | Master node 2 |
k8s-worker-01 | 192.168.87.131 | CentOS 7 x86_64, 2 vCPU / 2 GB | Worker node 1 |
k8s-worker-02 | 192.168.87.132 | CentOS 7 x86_64, 2 vCPU / 2 GB | Worker node 2 |
Cluster Plan
The target is a highly available Kubernetes cluster with "1 load balancer + 2 Master nodes + 2 Worker nodes":

- Docker 19.03.11
- Kubernetes v1.19.3
- Calico network plugin
- HAProxy load balancer

Cluster control-plane endpoint:
192.168.87.111:6443
Cluster API server(s):
192.168.87.121:6443
192.168.87.122:6443
To push availability further, use a highly available load balancer service from your cloud provider, or a "multiple HAProxy + Keepalived + VIP" setup. See: https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing
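For illustration, a minimal Keepalived VRRP instance on each HAProxy host might look like the sketch below. The interface name, router ID, priorities and password are assumptions, and the VIP 192.168.87.110 is a hypothetical address fronting both HAProxy hosts; adapt them to your environment.

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby HAProxy host
    interface eth0            # adjust to the host's NIC name
    virtual_router_id 51
    priority 101              # use a lower priority on the standby
    authentication {
        auth_type PASS
        auth_pass 42          # example password, change it
    }
    virtual_ipaddress {
        192.168.87.110        # hypothetical VIP shared by the HAProxy hosts
    }
}
```

With this setup, clients use the VIP as the control-plane endpoint instead of a single HAProxy host's IP, so the load balancer itself is no longer a single point of failure.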
Deploying the Load Balancer
Install HAProxy:
```shell
yum install haproxy -y
```
Edit /etc/haproxy/haproxy.cfg: delete the default proxy sections and add a frontend/backend that reverse-proxies to the Kubernetes Master nodes:
```
#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind 192.168.87.111:6443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
        server k8s-master-01 192.168.87.121:6443 check
        server k8s-master-02 192.168.87.122:6443 check
```
Run HAProxy as a systemd service:
```shell
systemctl daemon-reload
systemctl enable haproxy
systemctl start haproxy
```
Check the HAProxy service status:
```shell
systemctl status haproxy -l
```
Setting Up the First Node
Clone the k8s-deploy project:
```shell
# Install Git
yum install git -y

# Clone the k8s-deploy project
mkdir -p ~/k8s
cd ~/k8s
git clone https://github.com/cookcodeblog/k8s-deploy.git

# Make the shell scripts executable
cd k8s-deploy/kubeadm_v1.19.3
find . -name '*.sh' -exec chmod u+x {} \;
```
Install Kubernetes:
```shell
# Pre-installation checks and configuration
bash 01_pre_check_and_configure.sh

# Install Docker
bash 02_install_docker.sh

# Install kubeadm, kubectl and kubelet
bash 03_install_kubernetes.sh

# Pull the images required by the Kubernetes cluster
bash 04_pull_kubernetes_images_from_aliyun.sh

# Pull the Calico images
bash 04_pull_calico_images.sh
```
Clone the server as a base image
It is recommended to clone this server as a base image now, to speed up provisioning the remaining cluster nodes.
Deploying the First Master Node
Initialize the cluster:
```shell
cd ~/k8s
cd k8s-deploy/kubeadm_v1.19.3

# Usage: 05_kubeadm_init.sh <CONTROL_PLANE_ENDPOINT> <API_SERVER_IP>
bash 05_kubeadm_init.sh 192.168.87.111:6443 192.168.87.121
```
Save the output of kubeadm init; you will need it later to join the other Master node and the Worker nodes to the cluster.
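As a rough sketch of what 05_kubeadm_init.sh presumably runs under the hood (the flag names are real kubeadm init flags, but the exact set of options the script uses is an assumption on my part):

```shell
# Hypothetical reconstruction of the kubeadm init call behind 05_kubeadm_init.sh.
CONTROL_PLANE_ENDPOINT="192.168.87.111:6443"
API_SERVER_IP="192.168.87.121"

cmd="kubeadm init \
  --control-plane-endpoint=${CONTROL_PLANE_ENDPOINT} \
  --apiserver-advertise-address=${API_SERVER_IP} \
  --upload-certs \
  --kubernetes-version=v1.19.3"

# Echoed here for illustration; on the real Master node you would execute it.
echo "$cmd"
```

The important parts are --control-plane-endpoint pointing at the load balancer (not at the Master's own IP) and --upload-certs, which stores the control-plane certificates in the cluster so that additional Masters can join with a certificate key.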
Install the Calico network plugin:
```shell
bash 06_install_calico.sh
```
Deploying the Worker Nodes
On each Worker node, run the "join worker node" command printed in the kubeadm init output.
Example:
```shell
# Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.87.111:6443 --token nrnmhq.g8thtjhrqlq3jodl \
    --discovery-token-ca-cert-hash sha256:ccc4b29d0d756ba72e07a0e84c7f65a771db593a99c212ee0d1e963783cdb1e8
```
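The --discovery-token-ca-cert-hash value is not arbitrary: per the kubeadm documentation it is the SHA-256 digest of the cluster CA's DER-encoded public key. A self-contained sketch of the derivation, using a throwaway certificate in place of the real /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null

# kubeadm's discovery hash = sha256 of the CA's DER-encoded public key.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 | awk '{print $NF}')

echo "sha256:${hash}"
```

Running the same pipeline against the real ca.crt on a Master node reproduces the hash printed by kubeadm init, which is handy if you lost the original output.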
Enable the kubectl command on the Worker node:
```shell
bash enable_kubectl_worker.sh
```
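What enable_kubectl_worker.sh does internally is an assumption on my part; a common minimal approach on a Worker node, which has no admin.conf, is to point kubectl at the node's kubelet credentials:

```shell
# Assumption: point kubectl at the Worker node's kubelet kubeconfig, since
# Worker nodes have no /etc/kubernetes/admin.conf. Note this grants only the
# node's limited permissions, not cluster-admin.
export KUBECONFIG=/etc/kubernetes/kubelet.conf
echo "KUBECONFIG=$KUBECONFIG"
```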
Deploying the Second Master Node
On the second Master node, run the "join control-plane node" command printed in the kubeadm init output.
Example:
```shell
# You can now join any number of control-plane nodes by running the following command on each as root:
kubeadm join 192.168.87.111:6443 --token nrnmhq.g8thtjhrqlq3jodl \
    --discovery-token-ca-cert-hash sha256:ccc4b29d0d756ba72e07a0e84c7f65a771db593a99c212ee0d1e963783cdb1e8 \
    --control-plane --certificate-key a1b3d7df2881fe005149be8d8a359df2d18ec9af8ad92cd89688fbf185687f9f
```
Note that the certificate key gives access to sensitive cluster data, so keep it secret. As a safeguard, the uploaded certificates are deleted after two hours; if necessary, run kubeadm init phase upload-certs --upload-certs on a Master node to re-upload them.
Enable the kubectl command on the Master node:
```shell
bash enable_kubectl_master.sh
```
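enable_kubectl_master.sh most likely performs the standard kubeadm post-init step of copying admin.conf into ~/.kube (an assumption about the script; the copy itself is the documented kubeadm procedure). A sketch that falls back to a placeholder file so it can also run off-cluster:

```shell
# Standard kubeadm step: make the cluster-admin kubeconfig available to kubectl.
ADMIN_CONF=/etc/kubernetes/admin.conf
# Fallback placeholder so this sketch also runs off a real Master (illustration only):
[ -f "$ADMIN_CONF" ] || { ADMIN_CONF=$(mktemp); echo "apiVersion: v1" > "$ADMIN_CONF"; }

mkdir -p "$HOME/.kube"
cp -f "$ADMIN_CONF" "$HOME/.kube/config"
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```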
Checking the Cluster
```shell
# Display cluster info
kubectl cluster-info

# Display nodes
kubectl get nodes

# Display pods
kubectl get pods --all-namespaces -o wide

# Check for pods that are not in Running status
kubectl get pods --all-namespaces | grep -v Running
```
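Note that `grep -v Running` also drops the header line and still lists pods in Completed status. The filtering can be illustrated on simulated output (the pod names below are made up):

```shell
# Simulated `kubectl get pods --all-namespaces` output (sample data only).
cat > /tmp/pods.txt <<'EOF'
NAMESPACE     NAME                       READY   STATUS             RESTARTS   AGE
kube-system   coredns-6d56c8448f-abcde   1/1     Running            0          5m
kube-system   calico-node-xyz12          0/1     CrashLoopBackOff   3          5m
EOF

# Keep the header row but hide Running pods, so problem pods stand out:
awk 'NR==1 || $4 != "Running"' /tmp/pods.txt
```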
Troubleshooting
Setting hostnames and IPs when testing with VMware
See: Setting up a cluster test environment with a workstation and VMware
The official Yum repositories are slow; use Alibaba Cloud Yum mirrors
See: Using China-local Yum mirrors
No direct Internet access: go through an HTTP forward proxy
If the nodes cannot reach the Internet, configure an HTTP proxy so that online yum installs, file downloads and image pulls still work.
See: Setting up a Squid HTTP forward proxy on CentOS 7
If you used an HTTP proxy, unset the global http_proxy and https_proxy before initializing the Kubernetes cluster, then run systemctl restart kubelet to restart the kubelet service.
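Clearing the proxy can be sketched as follows (the profile path in the comment is an assumption about where the proxy was configured):

```shell
# Clear the proxy for the current shell so kubeadm and kubelet talk to the
# apiserver directly instead of routing through the proxy.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Also remove any persistent setting, then restart kubelet (path assumed):
#   sed -i '/_proxy/Id' /etc/profile.d/proxy.sh
#   systemctl restart kubelet
```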
Setting a proxy for docker pull
To pull Docker images through a proxy with docker pull, see:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Example /etc/systemd/system/docker.service.d/http-proxy.conf:
```
[Service]
Environment="HTTP_PROXY=http://172.31.240.127:3128"
Environment="HTTPS_PROXY=http://172.31.240.127:3128"
Environment="NO_PROXY=127.0.0.1,localhost,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
```
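After creating the drop-in, systemd must reload and Docker restart for it to take effect. The sketch below writes to a temporary directory so it can run anywhere; on a real host the target path is /etc/systemd/system/docker.service.d/http-proxy.conf:

```shell
# Illustration: write the drop-in to a temp dir; on a real host, use
# /etc/systemd/system/docker.service.d/http-proxy.conf instead.
dropin_dir="$(mktemp -d)/docker.service.d"
mkdir -p "$dropin_dir"

cat > "$dropin_dir/http-proxy.conf" <<'EOF'
[Service]
Environment="HTTP_PROXY=http://172.31.240.127:3128"
Environment="HTTPS_PROXY=http://172.31.240.127:3128"
EOF

grep HTTP_PROXY "$dropin_dir/http-proxy.conf"
# Apply on the real host:
#   systemctl daemon-reload
#   systemctl restart docker
#   docker info | grep -i proxy    # verify the settings are active
```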
Note: if Squid is the proxy server, set HTTP_PROXY and HTTPS_PROXY in /etc/systemd/system/docker.service.d/http-proxy.conf to the same plain-http URL, i.e. http://proxy_server_ip:3128.
If you instead set HTTPS_PROXY=https://172.31.240.127:3128 (an https URL), pulls fail with: `proxyconnect tcp: tls: first record does not look like a TLS handshake`
See: https://github.com/docker/for-linux/issues/415
Pulling from Docker Hub is slow; use an Alibaba Cloud registry mirror
See: China-local Docker Yum repositories and registry mirrors
Pulling Google images is slow; use Alibaba Cloud registries
If you cannot install from the Google Kubernetes Yum repository, install from the Alibaba Cloud Kubernetes Yum repository instead.
See:
Kubernetes China-local mirrors, package downloads and pulling gcr.io images
https://github.com/cookcodeblog/docker-library
Other Notes
Docker versions compatible with Kubernetes
Kubernetes version | Docker versions |
---|---|
v1.15 | 1.13.1 , 17.03 , 17.06 , 17.09 , 18.06 , 18.09 |
v1.16 | 1.13.1 , 17.03 , 17.06 , 17.09 , 18.06 , 18.09 |
v1.17 | 1.13.1 , 17.03 , 17.06 , 17.09 , 18.06 , 18.09 , 19.03 |
v1.18 | 1.13.1 , 17.03 , 17.06 , 17.09 , 18.06 , 18.09 , 19.03 |
v1.19 | 1.13.1 , 17.03 , 17.06 , 17.09 , 18.06 , 18.09 , 19.03 |
References
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
https://docs.docker.com/engine/install/centos/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model
https://kubernetes.io/docs/concepts/cluster-administration/addons/
https://docs.projectcalico.org/getting-started/kubernetes/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
Unable to update cni config: No networks found in /etc/cni/net.d
https://docs.projectcalico.org/releases
https://github.com/kubernetes/kubernetes/issues/93472
https://octetz.com/docs/2019/2019-03-26-ha-control-plane-kubeadm/
https://blog.scottlowe.org/2019/08/12/converting-kubernetes-to-ha-control-plane/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
https://blog.inkubate.io/install-and-configure-a-multi-master-kubernetes-cluster-with-kubeadm/
https://dockerlabs.collabnix.com/kubernetes/beginners/Install-and-configure-a-multi-master-Kubernetes-cluster-with-kubeadm.html
https://www.unixmen.com/configure-high-available-load-balancer-haproxy-keepalived/
https://snapdev.net/2015/09/08/install-haproxy-and-keepalived-on-centos-7-for-mariadb-cluster/
https://www.unixmen.com/installing-haproxy-for-load-balancing-on-centos-7
kubelet service fails to start with error code 255