[k8s] Smoothly upgrading a cluster from single-master to multi-master
Why upgrade
With a single master node in production, a failure of that node takes the whole cluster down, which is a serious risk. We therefore upgrade the cluster to a highly available multi-master setup, in which the loss of any one node does not affect normal cluster operation.
Environment
master   control-plane,master   v1.21.0
node1    node                   v1.21.0
node2    node                   v1.21.0
The plan is to promote node1 and node2 to masters. To make the api-server highly available, haproxy is deployed on two other machines to proxy the api-server. Below, inspot:6443 denotes the new api-server address.
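As a reference, a minimal haproxy configuration for fronting the three api-servers could look like the sketch below. This is an assumption, not the author's actual config: the mapping of master/node1/node2 to the 10.196.96.x addresses is inferred from the certSANs list later in this post, and the /tmp path is illustrative.

```shell
# Sketch only: write a minimal TCP-mode haproxy config that load-balances
# the three api-servers behind port 6443 (host-to-IP mapping is assumed).
cat > /tmp/haproxy-apiserver.cfg <<'EOF'
defaults
    mode    tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend k8s-apiserver
    bind *:6443
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    server master 10.196.96.11:6443 check
    server node1  10.196.96.12:6443 check
    server node2  10.196.96.17:6443 check
EOF
grep -c '^    server ' /tmp/haproxy-apiserver.cfg   # three backends
```

Plain TCP passthrough (mode tcp) is used so that TLS terminates at the api-servers themselves and the certificate work below still applies unchanged.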
Back up the k8s environment
Back up the k8s configuration, just in case
cp -a /etc/kubernetes /etc/kubernetes.bak
Request a new api-server certificate
Export the kubeadm-config ConfigMap from kube-system and modify it
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
Old configuration
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: master:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
New configuration (certSANs added, controlPlaneEndpoint changed)
apiServer:
  certSANs:
  - inspot
  - master
  - node1
  - node2
  - 10.196.96.11
  - 10.196.96.12
  - 10.196.96.17
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: inspot:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Generate the apiserver certificate
Move the old certificate aside as a backup
mv /etc/kubernetes/pki/apiserver.{crt,key} /path/to/backup/
Generate the new certificate
kubeadm init phase certs apiserver --config kubeadm.yaml
Verify the certificate contents
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
The new certificate now carries the updated DNS/SAN entries:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            20:be:05:25:b1:76:88:59:48:e1:bb:60:36:06:8e:8e:33:d2:e4:e2
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Aug 30 02:16:10 2022 GMT
            Not After : Aug 27 02:16:10 2032 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:a9:64:9e:8d:f7:93:b0:48:aa:b0:21:d9:db:ea:
                    17:19:ae:2e:f0:40:b0:25:d8:c2:6c:26:7a:2e:48:
                    f7:93:51:96:d4:1c:d8:0a:b2:57:bc:49:f5:4a:73:
                    d3:f6:6c:f2:b0:98:14:05:91:47:87:11:a4:0e:de:
                    0c:6e:4e:6c:59:37:2f:50:0c:23:53:0b:18:6e:66:
                    c0:c7:64:88:c4:2d:93:ce:13:3b:c4:0f:89:68:2a:
                    48:09:ac:91:46:5b:c9:f8:f1:48:10:3b:10:fd:70:
                    20:d0:b4:a2:62:c1:0b:b0:91:2c:10:83:06:e6:df:
                    fc:54:10:da:32:d3:db:79:e7:3e:55:c2:41:92:33:
                    d1:3e:14:9d:d7:c4:f1:02:40:88:e7:2d:97:b9:26:
                    20:a0:9e:65:05:d6:51:90:22:98:3a:19:ce:56:cf:
                    68:d0:d5:98:33:e0:af:5c:df:20:f8:b6:83:60:2b:
                    fb:af:2b:86:33:c4:0b:b6:a6:1e:d9:ce:14:4e:54:
                    48:12:10:2d:51:0f:db:bb:9b:e0:e0:4d:a9:17:b2:
                    8b:a3:44:74:15:75:76:92:4e:95:34:9c:b7:2b:91:
                    6c:8f:80:d9:c4:5c:1e:24:39:59:40:fb:5b:9e:2e:
                    bb:fc:87:e3:47:cd:1c:05:22:d3:b0:62:9c:1e:a1:
                    1d:f7
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:inspot, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:master, DNS:node1, DNS:node2, IP Address:10.96.0.1, IP Address:10.196.96.11, IP Address:10.196.96.12, IP Address:10.196.96.17
    Signature Algorithm: sha256WithRSAEncryption
         7b:11:fb:0e:3d:9c:e7:79:d9:e9:1a:af:06:62:d9:39:ee:fe:
         d0:26:bb:5d:7d:51:b5:0c:f1:f0:e9:db:d6:37:46:c4:39:cc:
         38:1e:20:99:60:d1:21:f8:5c:9d:73:81:ec:d6:cf:c1:24:25:
         30:69:42:6c:80:04:8a:85:e7:f3:6e:de:01:61:f4:d4:e3:4f:
         a5:d1:28:5f:43:e8:2d:15:f9:e2:29:51:15:87:d3:66:99:84:
         3b:31:6b:62:ee:98:71:b1:25:bf:fc:c8:e2:b5:d5:0b:ee:a1:
         a2:59:b6:f8:fb:d8:94:7c:6a:f8:3e:f3:bc:2d:e1:01:68:58:
         34:06:5e:ab:88:07:44:1f:bb:a1:83:2d:bb:ea:58:c3:ee:ed:
         14:9f:1b:d3:a7:56:bd:08:a4:a5:74:2a:ea:9b:45:d1:e4:fd:
         5f:40:00:30:be:dc:5a:72:e6:c5:4b:e3:6f:3b:a3:84:d8:86:
         84:2f:6f:a4:d4:45:a7:15:14:ff:18:8e:ff:4b:42:4a:eb:b0:
         18:fb:32:95:ae:d2:24:dc:c2:5c:35:2e:68:df:4b:51:46:ae:
         8d:8e:9c:44:d0:09:a6:eb:3a:76:30:3f:f5:8c:f3:4a:59:1c:
         cc:e7:d9:ac:f2:36:07:ca:96:51:0e:0c:3d:5b:da:f4:ac:36:
         50:2f:68:a7
-----BEGIN CERTIFICATE-----
MIIDmDCCAoCgAwIBAgIUIL4FJbF2iFlI4btgNgaOjjPS5OIwDQYJKoZIhvcNAQEL
BQAwFTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0yMjA4MzAwMjE2MTBaFw0zMjA4
MjcwMjE2MTBaMBkxFzAVBgNVBAMMDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqWSejfeTsEiqsCHZ2+oXGa4u8ECwJdjCbCZ6
Lkj3k1GW1BzYCrJXvEn1SnPT9mzysJgUBZFHhxGkDt4Mbk5sWTcvUAwjUwsYbmbA
x2SIxC2TzhM7xA+JaCpICayRRlvJ+PFIEDsQ/XAg0LSiYsELsJEsEIMG5t/8VBDa
MtPbeec+VcJBkjPRPhSd18TxAkCI5y2XuSYgoJ5lBdZRkCKYOhnOVs9o0NWYM+Cv
XN8g+LaDYCv7ryuGM8QLtqYe2c4UTlRIEhAtUQ/bu5vg4E2pF7KLo0R0FXV2kk6V
NJy3K5Fsj4DZxFweJDlZQPtbni67/IfjR80cBSLTsGKcHqEd9wIDAQABo4HbMIHY
MA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATCBsAYDVR0RBIGo
MIGlggZpbnNwb3SCCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVmYXVsdIIWa3Vi
ZXJuZXRlcy5kZWZhdWx0LnN2Y4Ika3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVz
dGVyLmxvY2Fsggt0cm4wMS0wNS0wMoILdHJuMDMtMDYtMDOCC3RybjA1LTAyLTAx
hwQKYAABhwQKxGALhwQKxGAMhwQKxGARMA0GCSqGSIb3DQEBCwUAA4IBAQB7EfsO
PZznednpGq8GYtk57v7QJrtdfVG1DPHw6dvWN0bEOcw4HiCZYNEh+Fydc4Hs1s/B
JCUwaUJsgASKhefzbt4BYfTU40+l0ShfQ+gtFfniKVEVh9NmmYQ7MWti7phxsSW/
/MjitdUL7qGiWbb4+9iUfGr4PvO8LeEBaFg0Bl6riAdEH7uhgy276ljD7u0UnxvT
p1a9CKSldCrqm0XR5P1fQAAwvtxacubFS+NvO6OE2IaEL2+k1EWnFRT/GI7/S0JK
67AY+zKVrtIk3MJcNS5o30tRRq6NjpxE0Amm6zp2MD/1jPNKWRzM59ms8jYHypZR
Dgw9W9r0rDZQL2in
-----END CERTIFICATE-----
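If you only want to check the SAN list rather than read the full dump, openssl can print just that extension. The snippet below is a self-contained sketch: it creates a throwaway certificate carrying SANs like the ones added above (the /tmp paths and the exact SAN set are illustrative), then extracts the extension the same way you would from apiserver.crt.

```shell
# Generate a throwaway cert with certSANs-style entries (illustrative paths).
# Requires OpenSSL 1.1.1+ for -addext / -ext.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:inspot,DNS:master,DNS:node1,DNS:node2,IP:10.196.96.11" \
  2>/dev/null

# For the real cert you would run:
#   openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -ext subjectAltName
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

The output is just the "X509v3 Subject Alternative Name" block, which makes it easy to eyeball that inspot and all node names/IPs are present.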
Write the new entries back into kubeadm-config
kubectl edit cm kubeadm-config -n kube-system
Add the same certSANs block (under apiServer) there as well:
certSANs:
- inspot
- master
- node1
- node2
- 10.196.96.11
- 10.196.96.12
- 10.196.96.17
Replace the api-server address everywhere under /etc/kubernetes
# template
sed -i "s/oldapiservername/newapiservername/g" $(grep oldapiservername -rl /etc/kubernetes)
# example
sed -i "s/10.196.96.11:6443/inspot:6443/g" $(grep "10.196.96.11:6443" -rl /etc/kubernetes)
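To see what this grep-plus-sed combination does without touching a live node, here is a self-contained rehearsal against a scratch directory (the /tmp paths and file names are illustrative, not real kubeconfig files):

```shell
# Rehearse the bulk replacement on a scratch copy instead of /etc/kubernetes.
mkdir -p /tmp/kubecfg
echo "server: https://10.196.96.11:6443" > /tmp/kubecfg/kubelet.conf
echo "server: https://10.196.96.11:6443" > /tmp/kubecfg/controller-manager.conf

# Same pattern as above: grep -rl lists every file containing the old
# address; sed -i rewrites it in place in each of those files.
sed -i "s/10.196.96.11:6443/inspot:6443/g" $(grep "10.196.96.11:6443" -rl /tmp/kubecfg)

grep -h "server:" /tmp/kubecfg/*.conf
```

Note that `grep -rl` only returns files that actually contain the pattern, so sed never touches unrelated files; that is why the template pipes one into the other.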
Update the api-server address in the default kubeconfig
vi ~/.kube/config
Update the api-server address in kubeadm-config
kubectl edit cm kubeadm-config -n kube-system
Set the controlPlaneEndpoint field to inspot:6443
Restart the api-server and related control-plane containers
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd' | awk -F ' ' '{print $1}' |xargs docker restart
Update cluster-info
kubectl -n kube-public edit cm cluster-info
server: https://inspot:6443
Verify cluster-info; output like the following means it worked
root@master:~# kubectl cluster-info
Kubernetes control plane is running at https://inspot:6443
CoreDNS is running at https://inspot:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Upload the certificates to the cluster; the key printed here is needed later
kubeadm init phase upload-certs --upload-certs --config kubeadm.yaml
I0612 02:21:45.938155 103642 version.go:252] remote version is much newer: v1.24.1; falling back to: stable-1.19
W0612 02:21:46.570837 103642 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
# the key generated here is used later when the new masters join the cluster
1b594f631654ac0f221dd0cf593605a30b8f41eee480562d2a3103ab667cfcc7
The following command creates a token and prints the full join command; both are used below
kubeadm token create --print-join-command --config kubeadm.yaml
Assemble the control-plane join command from the information gathered above, and run it on the two nodes being promoted to masters
# the token can also be (re)created with: kubeadm token create
kubeadm join inspot:6443 \
  --token f27w7m.adelvl3waw9kqdhp \
  --discovery-token-ca-cert-hash <sha256 hash printed by the print-join-command step> \
  --control-plane --certificate-key <key printed by the upload-certs step>
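If the hash from `--print-join-command` has been lost, it can be recomputed from the cluster CA, as described in the kubeadm documentation. The sketch below generates a throwaway CA in /tmp purely to demonstrate the pipeline; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead.

```shell
# Create a throwaway CA cert just to demonstrate the pipeline (illustrative;
# use /etc/kubernetes/pki/ca.crt on a real master).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=kubernetes" 2>/dev/null

# SHA-256 of the CA public key (DER), the value expected by
# --discovery-token-ca-cert-hash as "sha256:<hex>".
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```

This is handy because tokens expire (24h by default) while the CA, and therefore the hash, stays stable for the life of the cluster.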
Since node1 and node2 are still serving workloads, first mark the node unschedulable, then drain it and delete it from the cluster
kubectl cordon node1
# --delete-emptydir-data allows evicting pods that use emptyDir volumes (their emptyDir data is lost)
kubectl drain node1 --ignore-daemonsets --force --delete-emptydir-data
kubectl delete node node1
Because the removed node is not entirely "clean", also reset it and move the leftover state under /etc/kubernetes out of the way
kubeadm reset
mv /etc/kubernetes/* /path/to/backup/
mv /var/lib/etcd/* /path/to/backup/
Then run the kubeadm join command assembled above, and the node rejoins the cluster as a master.