Table of Contents

1. Check certificate expiration
1.1 Method 1
1.2 Method 2
2. Renew certificates with kubeadm commands
2.1 Check certificate validity
2.2 Back up existing configuration
2.3 Back up certificates
2.4 Renew certificates
2.5 Confirm the new validity period
2.6 Regenerate kubeconfig files
2.7 Update the client kubeconfig
2.8 Restart the affected pods
2.9 Verify the cluster is healthy
2.10 Check the etcd cluster status
3. Build kubeadm from source with a custom certificate validity
3.1 Back up the cluster configuration
3.2 Download the matching kubeadm source code
3.3 Change the CA certificate validity
3.4 Change the validity of the other certificates
3.5 Install the Go toolchain for building
3.6 Configure a Go module proxy (for mainland China)
3.7 Build kubeadm
3.8 Replace the kubeadm binary
3.9 Renew the cluster certificates
3.10 Regenerate kubeconfig files
3.11 Restart the affected pods
3.12 Replace the admin kubeconfig
3.13 Confirm the cluster is healthy
3.14 Confirm the certificates were renewed


The Kubernetes CA certificates are valid for 10 years, but the component certificates are only valid for 1 year, so they have to be renewed to stay usable. There are currently three mainstream approaches:

1. Upgrade the cluster: every version upgrade renews the component certificates for another year. The one-year validity was chosen upstream precisely to push users to upgrade at least once a year.
2. Renew the certificates with kubeadm commands (this only extends them by one year).
3. Build kubeadm from source so that the certificate validity can be customized.

This document was written against Kubernetes v1.22.0; it is not guaranteed to apply to other versions, so test for yourself.

1. Check certificate expiration

1.1 Method 1

kubeadm certs check-expiration

1.2 Method 2

$ for item in `find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"`;
do openssl x509 -in $item -text -noout| grep Not;
echo ======================$item===============;
done

You can also check the certificates one by one:

$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
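
The client certificates embedded in the kubeconfig files (admin.conf, controller-manager.conf, scheduler.conf) do not live under /etc/kubernetes/pki, so the loop above does not cover them. A minimal sketch for inspecting one of them, assuming the certificate is embedded inline as client-certificate-data rather than referenced by a file path:

$ grep 'client-certificate-data' /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -dates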

2. Renew certificates with kubeadm commands

2.1 Check certificate validity

[root@k8s-master1][14:26:17][OK] ~ 
# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 19, 2023 11:47 UTC   <invalid>                               no      
apiserver                  Apr 19, 2023 11:47 UTC   <invalid>       ca                      no      
apiserver-etcd-client      Apr 19, 2023 11:47 UTC   <invalid>       etcd-ca                 no      
apiserver-kubelet-client   Apr 19, 2023 11:47 UTC   <invalid>       ca                      no      
controller-manager.conf    Apr 19, 2023 11:47 UTC   <invalid>                               no      
etcd-healthcheck-client    Apr 19, 2023 11:47 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Apr 19, 2023 11:47 UTC   <invalid>       etcd-ca                 no      
etcd-server                Apr 19, 2023 11:47 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Apr 19, 2023 11:47 UTC   <invalid>       front-proxy-ca          no      
scheduler.conf             Apr 19, 2023 11:47 UTC   <invalid>                               no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 16, 2032 11:47 UTC   8y              no      
etcd-ca                 Apr 16, 2032 11:47 UTC   8y              no      
front-proxy-ca          Apr 16, 2032 11:47 UTC   8y              no

As shown above, the certificates expired on April 19, 2023.

If the certificates have expired, kubectl fails like this:

$ kubectl get pod -n kube-system
Unable to connect to the server: x509: certificate has expired or is not yet valid
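
When the serving certificate itself has expired, kubectl cannot help, but the certificate presented by the API server can still be inspected directly over TLS. A sketch, assuming it is run on a master node and the apiserver listens on the default port 6443:

$ echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -dates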

2.2 Back up existing configuration

# kubectl -n kube-system get cm kubeadm-config -o yaml > kubeadm-config.yaml

# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.8
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.0.0.250:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}

2.3 Back up certificates

$ cp -rp /etc/kubernetes /root/kubernetes_$(date +%F)
$ ls /root/kubernetes_2023-04-26/
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
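
Besides the manifests and certificates, it can also be prudent to snapshot etcd before touching the control plane. A sketch, assuming a stacked etcd and the certificate paths shown later in this document:

$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /root/etcd-snapshot-$(date +%F).db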

2.4 Renew certificates

This step must be executed on every k8s master node.

$ kubeadm certs renew all --config=/root/kubeadm-config.yaml
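
If you only want to renew a subset of the certificates instead of all of them, kubeadm certs renew also accepts individual targets, for example:

$ kubeadm certs renew apiserver --config=/root/kubeadm-config.yaml
$ kubeadm certs renew etcd-server --config=/root/kubeadm-config.yaml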

2.5 Confirm the new validity period

[root@k8s-master1][14:39:18][OK] ~/kubernetes 
# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 23, 2024 06:39 UTC   364d                                    no      
apiserver                  Apr 23, 2024 06:39 UTC   364d            ca                      no      
apiserver-etcd-client      Apr 23, 2024 06:39 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Apr 23, 2024 06:39 UTC   364d            ca                      no      
controller-manager.conf    Apr 23, 2024 06:39 UTC   364d                                    no      
etcd-healthcheck-client    Apr 23, 2024 06:39 UTC   364d            etcd-ca                 no      
etcd-peer                  Apr 23, 2024 06:39 UTC   364d            etcd-ca                 no      
etcd-server                Apr 23, 2024 06:39 UTC   364d            etcd-ca                 no      
front-proxy-client         Apr 23, 2024 06:39 UTC   364d            front-proxy-ca          no      
scheduler.conf             Apr 23, 2024 06:39 UTC   364d                                    no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 16, 2032 11:47 UTC   8y              no      
etcd-ca                 Apr 16, 2032 11:47 UTC   8y              no      
front-proxy-ca          Apr 16, 2032 11:47 UTC   8y              no

2.6 Regenerate kubeconfig files

$ rm -f /etc/kubernetes/*.conf
$ kubeadm init phase kubeconfig all --config /root/kubeadm-config.yaml

2.7 Update the client kubeconfig

$ cp $HOME/.kube/config{,.default}
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config

2.8 Restart the affected pods

$ docker ps |egrep "k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd" | awk '{print $1}' | xargs docker rm -f

Alternatively, you can simply restart the kubelet service:

systemctl restart kubelet
systemctl status kubelet
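
If the cluster runs on containerd instead of Docker, the same effect can be achieved with crictl; a sketch, assuming crictl is configured against the containerd socket (kubelet will recreate the stopped static-pod containers):

$ crictl ps | egrep "kube-apiserver|kube-controller-manager|kube-scheduler|etcd" | awk '{print $1}' | xargs -r crictl stop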

2.9 Verify the cluster is healthy

[root@k8s-master1][15:26:24][OK] ~ 
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
[root@k8s-master1][15:26:28][OK] ~ 
# kubectl get node
NAME          STATUS   ROLES                  AGE    VERSION
k8s-master1   Ready    control-plane,master   369d   v1.22.0
k8s-master2   Ready    control-plane,master   369d   v1.22.0
k8s-master3   Ready    control-plane,master   369d   v1.22.0
k8s-node1     Ready    <none>                 369d   v1.22.0
k8s-node2     Ready    <none>                 369d   v1.22.0
k8s-node3     Ready    <none>                 369d   v1.22.0

2.10 Check the etcd cluster status

1. Check the current etcd cluster members

etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list -w table

+------------------+---------+-------------+------------------------+------------------------+------------+
|        ID        | STATUS  |    NAME     |       PEER ADDRS       |      CLIENT ADDRS      | IS LEARNER |
+------------------+---------+-------------+------------------------+------------------------+------------+
|  7b12475ec5537e0 | started | k8s-master2 |  https://10.0.0.9:2380 |  https://10.0.0.9:2379 |      false |
| 261f27b021c9631d | started | k8s-master1 |  https://10.0.0.8:2380 |  https://10.0.0.8:2379 |      false |
| 51603ca6dd5e6119 | started | k8s-master3 | https://10.0.0.10:2380 | https://10.0.0.10:2379 |      false |
+------------------+---------+-------------+------------------------+------------------------+------------+

2. Check the status of the etcd cluster endpoints

etcdctl --endpoints=https://10.0.0.8:2379,https://10.0.0.9:2379,https://10.0.0.10:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint status --write-out=table

+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|  https://10.0.0.8:2379 | 261f27b021c9631d |   3.5.0 |   28 MB |      true |      false |        47 |     239893 |             239893 |        |
|  https://10.0.0.9:2379 |  7b12475ec5537e0 |   3.5.0 |   30 MB |     false |      false |        47 |     239893 |             239893 |        |
| https://10.0.0.10:2379 | 51603ca6dd5e6119 |   3.5.0 |   28 MB |     false |      false |        47 |     239893 |             239893 |        |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
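
3. Optionally, probe the health of every endpoint with the same certificates:

etcdctl --endpoints=https://10.0.0.8:2379,https://10.0.0.9:2379,https://10.0.0.10:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint health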

3. Build kubeadm from source with a custom certificate validity

3.1 Back up the cluster configuration

# kubectl get cm -n kube-system kubeadm-config -o yaml>kubeadm-config.yaml
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:02:08Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

3.2 Download the matching kubeadm source code

$ wget https://github.com/kubernetes/kubernetes/archive/v1.22.0.tar.gz
$ tar zxvf v1.22.0.tar.gz
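
Alternatively, the same tag can be fetched with a shallow git clone (a sketch; the target directory name is chosen here to match the tarball layout used below):

$ git clone --depth 1 --branch v1.22.0 https://github.com/kubernetes/kubernetes.git kubernetes-1.22.0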

3.3 Change the CA certificate validity

$ vim kubernetes-1.22.0/staging/src/k8s.io/client-go/util/cert/cert.go
# Around line 65, change the multiplier in NotAfter from 10 to 100 (years):
                NotBefore:             now.UTC(),
                NotAfter:              now.Add(duration365d * 100).UTC(),  // default is duration365d * 10
                KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
                BasicConstraintsValid: true,
                IsCA:                  true,

3.4 Change the validity of the other certificates

$ vim kubernetes-1.22.0/cmd/kubeadm/app/constants/constants.go
# Go to line 48 and change it as follows (append * 100):
 48         CertificateValidity = time.Hour * 24 * 365 * 100
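
Before building, it is worth double-checking that both edits are in place:

$ grep -n "duration365d \* 100" kubernetes-1.22.0/staging/src/k8s.io/client-go/util/cert/cert.go
$ grep -n "CertificateValidity =" kubernetes-1.22.0/cmd/kubeadm/app/constants/constants.go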

3.5 Install the Go toolchain for building

If your Kubernetes version is relatively new, use a correspondingly recent Go release, otherwise the build will fail with dependency errors.

$ wget https://dl.google.com/go/go1.20.3.linux-amd64.tar.gz
$ tar zxf go1.20.3.linux-amd64.tar.gz -C /usr/local/
$ echo 'export PATH=/usr/local/go/bin:$PATH' >> /etc/profile
$ source /etc/profile
$ go version
go version go1.20.3 linux/amd64

3.6 Configure a Go module proxy (for mainland China)

$ go env -w GOPROXY=https://goproxy.cn,direct
$ go env -w GOSUMDB="sum.golang.google.cn"

3.7 Build kubeadm

$ cd kubernetes-1.22.0/        # enter the kubernetes source directory
$ make all WHAT=cmd/kubeadm GOFLAGS=-v
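
The build drops the binary under _output; its version string can be checked before replacing the system kubeadm:

$ ./_output/local/bin/linux/amd64/kubeadm version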

3.8 Replace the kubeadm binary

$ cp /usr/bin/kubeadm{,.bak}
$ \cp _output/local/bin/linux/amd64/kubeadm /usr/bin
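
On a multi-master cluster the patched binary must replace kubeadm on every master node; a sketch, assuming the master hostnames used earlier in this document and SSH access between the masters:

$ for node in k8s-master2 k8s-master3; do scp _output/local/bin/linux/amd64/kubeadm $node:/usr/bin/kubeadm; done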

3.9 Renew the cluster certificates

$ kubeadm config view > kubeadm-cluster.yaml
# If there are multiple master nodes, copy kubeadm-cluster.yaml and the rebuilt kubeadm binary to the other masters
# Note: on kubeadm releases where "kubeadm config view" is no longer available, the same configuration can be dumped with "kubectl get cm -n kube-system kubeadm-config -o yaml" as in section 3.1

# Renew the certificates (with multiple masters, run this on every master)
$ kubeadm certs renew all --config=kubeadm-cluster.yaml
W0904 07:23:15.938694   59308 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

3.10 Regenerate kubeconfig files

$ rm -f /etc/kubernetes/*.conf
$ kubeadm init phase kubeconfig all --config kubeadm-cluster.yaml 
W0904 07:25:41.882636   61426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

3.11 Restart the affected pods

On every master node, restart the four control-plane containers (kube-apiserver, kube-controller-manager, kube-scheduler, and etcd) so that the renewed certificates take effect.

$ docker ps |egrep "k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd" | awk '{print $1}' | xargs docker restart

3.12 Replace the admin kubeconfig

$ cp ~/.kube/config{,.old}
$ \cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ chown $(id -u):$(id -g) ~/.kube/config

3.13 Confirm the cluster is healthy
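
The same checks as in section 2.9 apply here, for example:

$ kubectl get node
$ kubectl get cs
$ kubectl get pod -n kube-system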

3.14 Confirm the certificates were renewed
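
Run the expiration check again; the certificates should now show the extended validity:

$ kubeadm certs check-expiration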

The output shows roughly 99 years here because, by the time I renewed the certificates, they had all already expired and the cluster was unusable; I first rolled the system clock back to a point where the old certificates were still valid, and then performed the renewal.
