k8s Series 14: Deploying an Etcd Cluster
The Kubernetes components themselves are stateless; all cluster state is kept in etcd. To make the cluster highly available, etcd itself therefore needs to be highly available as well.
Copy the certificates
In the previous post we generated a number of certificates and distributed them to every node; in this post we use the etcd-related ones.
PS: run this step on every node.
# Create the certificate directory and the etcd data directory
[root@node1 ~]# mkdir -pv /etc/etcd /var/lib/etcd
mkdir: created directory '/etc/etcd'
mkdir: created directory '/var/lib/etcd'
[root@node1 ~]#
# Restrict the etcd data directory to mode 700
[root@node1 ~]# chmod 700 /var/lib/etcd
[root@node1 ~]# ll /var/lib/ | grep etcd
drwx------ 2 root root 6 Mar 18 22:36 etcd
[root@node1 ~]#
# Copy the certificates into place
[root@node1 ~]# pwd
/root
[root@node1 ~]# cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
[root@node1 ~]# ls /etc/etcd/
ca.pem kubernetes-key.pem kubernetes.pem
[root@node1 ~]#
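As an optional sanity check, you can inspect the copied server certificate with openssl (present on virtually every Linux distribution) to confirm the subject and validity dates look right:
# Optional: inspect the copied certificate
[root@node1 ~]# openssl x509 -in /etc/etcd/kubernetes.pem -noout -subject -dates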
Configure the etcd systemd unit file
PS: run this step on every node.
# Get this node's hostname
[root@node1 ~]# ETCD_NAME=$(hostname -s)
# On each node, set that node's own IP address
[root@node1 ~]# ETCD_IP=192.168.112.130
# Hostnames of all cluster nodes
[root@node1 ~]# ETCD_NAMES=(node1 node2 node3)
# IP addresses of all cluster nodes
[root@node1 ~]# ETCD_IPS=(192.168.112.130 192.168.112.131 192.168.112.132)
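Optionally, echo the variables before generating the unit file to confirm they hold what you expect (output shown for node1):
# Optional: confirm the variables are set correctly
[root@node1 ~]# echo "${ETCD_NAME} ${ETCD_IP} ${ETCD_NAMES[*]} ${ETCD_IPS[*]}"
node1 192.168.112.130 node1 node2 node3 192.168.112.130 192.168.112.131 192.168.112.132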
# Generate the systemd unit file
[root@node1 ~]# cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${ETCD_IP}:2380 \\
--listen-peer-urls https://${ETCD_IP}:2380 \\
--listen-client-urls https://${ETCD_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${ETCD_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster ${ETCD_NAMES[0]}=https://${ETCD_IPS[0]}:2380,${ETCD_NAMES[1]}=https://${ETCD_IPS[1]}:2380,${ETCD_NAMES[2]}=https://${ETCD_IPS[2]}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
[root@node1 ~]#
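Because the EOF delimiter above is unquoted, the shell expands the ETCD_* variables while writing the file, so each node gets its own name and IP baked into the unit. If you want to double-check the result before starting anything, these optional commands print the expanded --name line and let systemd lint the unit:
# Optional: confirm the variables were expanded and let systemd check the unit
[root@node1 ~]# grep -- '--name' /etc/systemd/system/etcd.service
--name node1 \
[root@node1 ~]# systemd-analyze verify /etc/systemd/system/etcd.service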
Start the etcd service
PS: run this step on every node.
# Reload systemd so it picks up the new unit
[root@node1 ~]# systemctl daemon-reload
# Enable etcd to start on boot
[root@node1 ~]# systemctl enable etcd
# Start (or restart) the etcd service
[root@node1 ~]# systemctl restart etcd
# Check the etcd service status
[root@node1 ~]# systemctl status etcd
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2022-03-18 22:56:00 CST; 1min 16s ago
Docs: https://github.com/coreos
Main PID: 3663 (etcd)
Tasks: 8
CGroup: /system.slice/etcd.service
└─3663 /usr/local/bin/etcd --name node1 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file=/etc/etcd/kubernetes.pem...
Mar 18 22:56:00 node1 systemd[1]: Started etcd.
Mar 18 22:56:00 node1 etcd[3663]: health check for peer 2deb614427922fac could not connect: dial tcp 192.168.112.132:2380: connect: connection refused
Mar 18 22:56:00 node1 etcd[3663]: health check for peer 2deb614427922fac could not connect: dial tcp 192.168.112.132:2380: connect: connection refused
Mar 18 22:56:00 node1 etcd[3663]: peer 2deb614427922fac became active
Mar 18 22:56:00 node1 etcd[3663]: established a TCP streaming connection with peer 2deb614427922fac (stream MsgApp v2 writer)
Mar 18 22:56:00 node1 etcd[3663]: established a TCP streaming connection with peer 2deb614427922fac (stream Message writer)
Mar 18 22:56:00 node1 etcd[3663]: established a TCP streaming connection with peer 2deb614427922fac (stream Message reader)
Mar 18 22:56:00 node1 etcd[3663]: established a TCP streaming connection with peer 2deb614427922fac (stream MsgApp v2 reader)
Mar 18 22:56:04 node1 etcd[3663]: updated the cluster version from 3.0 to 3.4
Mar 18 22:56:04 node1 etcd[3663]: enabled capabilities for version 3.4
[root@node1 ~]#
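The early "connection refused" messages are expected: with --initial-cluster-state new, each member keeps probing its peers until they come up, and with Type=notify the unit does not report ready until etcd can actually serve requests. So start etcd on all three nodes, and if a node seems stuck you can follow its log:
# Follow the etcd log live while the remaining nodes start up
[root@node1 ~]# journalctl -u etcd -f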
Verify the etcd cluster
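A minimal way to verify the cluster, assuming the etcdctl binary was installed alongside etcd in /usr/local/bin, is to query the local endpoint with the same certificates the server uses; client certificate auth is enabled in the unit above, so --cacert, --cert, and --key are all required:
# Check that the local member is healthy
[root@node1 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem \
  endpoint health
# List the members to confirm all three nodes joined the cluster
[root@node1 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem \
  member list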