Cluster environment

Hostname   IP            Installed components                        OS
master     192.168.0.8   docker, kubectl, kubelet, kubeadm, flannel  CentOS 7.3
node01     192.168.0.9   docker, kubectl, kubelet, kubeadm           CentOS 7.3
node02     192.168.0.10  docker, kubectl, kubelet, kubeadm           CentOS 7.3

Software versions

kubernetes:1.11.2

docker-ce:18.06.1-ce

flannel: master

I. Environment initialization

1. Set the hostname on each node


  
  
  hostnamectl set-hostname master
  hostnamectl set-hostname node01
  hostnamectl set-hostname node02

2. Configure host mappings (on every node)


  
  
  cat <<EOF > /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.0.8 master
  192.168.0.9 node01
  192.168.0.10 node02
  EOF
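Before continuing, the hosts entries can be sanity-checked on each node. A minimal sketch; the `check_hosts` helper is my own, not part of the original article:

```shell
#!/bin/sh
# check_hosts: hypothetical helper, not from the original setup steps.
# Verifies that every expected node name appears in a hosts-format file.
check_hosts() {
  file=$1; shift
  for host in "$@"; do
    # -w: match the hostname as a whole word, not as a substring
    grep -qw "$host" "$file" || { echo "missing: $host"; return 1; }
  done
  echo "all nodes mapped"
}
```

Usage: `check_hosts /etc/hosts master node01 node02`.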

3. Disable the firewall

systemctl stop firewalld &&  systemctl disable firewalld
  
  

4. Disable SELinux


  
  
  setenforce 0  # disable SELinux for the current session
  sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux  # disable permanently via /etc/sysconfig/selinux
  sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

5. Disable swap, otherwise kubelet will fail!
 


  
  
  swapoff -a  # disable swap for the current session
  sed -i 's/.*swap.*/#&/' /etc/fstab  # disable permanently by commenting out the swap lines in /etc/fstab
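To confirm the sed edit really neutralized every swap entry, a small check can be run afterwards. This is a sketch; the helper name is assumed, not from the article:

```shell
#!/bin/sh
# swap_disabled_in_fstab: hypothetical helper for verifying the edit above.
# Prints "ok" only when no uncommented swap line remains in the given file.
swap_disabled_in_fstab() {
  if grep -E '^[^#].*[[:space:]]swap[[:space:]]' "$1" >/dev/null; then
    echo "active swap entry found"
    return 1
  fi
  echo "ok"
}
```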

6. Configure bridge routing parameters
 


  
  
  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF

Apply it immediately:

sysctl --system

or run sysctl -p /etc/sysctl.d/k8s.conf to apply it.
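After reloading, both bridge-nf keys should read 1; kube-proxy's iptables rules depend on them. A sketch that checks the conf file (the helper name is my own):

```shell
#!/bin/sh
# verify_bridge_nf: hypothetical helper; checks that a sysctl conf fragment
# declares both bridge-nf keys as 1, as the steps above require.
verify_bridge_nf() {
  for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    grep -q "^${key} = 1" "$1" || { echo "not set: $key"; return 1; }
  done
  echo "bridge-nf ok"
}
```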

7. Install dependencies and configure NTP


  
  
  yum install -y epel-release
  yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
  systemctl enable ntpdate.service
  echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
  crontab /tmp/crontab2.tmp
  systemctl start ntpdate.service

8. Add the Kubernetes yum repository


  
  
  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

9. Set resource limits (optional)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
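These echo >> lines append blindly, so re-running the setup duplicates entries in limits.conf. An idempotent variant (the `append_once` helper name is my own):

```shell
#!/bin/sh
# append_once: hypothetical helper; appends a line to a file only when that
# exact line is not already present, so the setup can be re-run safely.
append_once() {
  line=$1; file=$2
  # -x: match the whole line, -F: treat the pattern literally ("*" is literal)
  grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Example: the same limits as above, now idempotent
# for l in "* soft nofile 65536" "* hard nofile 65536"; do
#   append_once "$l" /etc/security/limits.conf
# done
```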

I wrote an initialization script, config.sh, that bundles these steps and speeds up node initialization.

II. Install and configure Docker

1. Install Docker
Refer to "Installing the latest Docker on CentOS 7".

2. Configure a proxy for Docker image pulls
Edit /usr/lib/systemd/system/docker.service and add the following lines before ExecStart:


  
  
  Environment="HTTPS_PROXY=http://ik8s.io:10080"
  Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"

3. Restart Docker

systemctl daemon-reload && systemctl restart docker

III. Install and configure kubeadm, kubelet, and kubectl

1. Install kubeadm, kubelet, and kubectl

yum install -y kubelet kubeadm kubectl
  
  

2. Configure kubelet

Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and modify the following line:

  
  
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"
  
  

3. Enable and start kubelet

systemctl enable kubelet && systemctl start kubelet

4. Command completion


  
  
  yum install -y bash-completion
  source /usr/share/bash-completion/bash_completion
  source <(kubectl completion bash)
  echo "source <(kubectl completion bash)" >> ~/.bashrc

IV. Initialize the master with kubeadm

Specify the Kubernetes version during initialization and set pod-network-cidr (flannel will need it later):
$ kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16


  
  
  [root@master]# kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16
  [init] using Kubernetes version: v1.11.2
  [preflight] running pre-flight checks
  I0825 11:41:52.394205    5611 kernel_validator.go:81] Validating kernel version
  I0825 11:41:52.394466    5611 kernel_validator.go:96] Validating kernel config
  [preflight/images] Pulling images required for setting up a Kubernetes cluster
  [preflight/images] This might take a minute or two, depending on the speed of your internet connection
  [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [preflight] Activating the kubelet service
  [certificates] Generated ca certificate and key.
  [certificates] Generated apiserver certificate and key.
  [certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.8]
  [certificates] Generated apiserver-kubelet-client certificate and key.
  [certificates] Generated sa key and public key.
  [certificates] Generated front-proxy-ca certificate and key.
  [certificates] Generated front-proxy-client certificate and key.
  [certificates] Generated etcd/ca certificate and key.
  [certificates] Generated etcd/server certificate and key.
  [certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
  [certificates] Generated etcd/peer certificate and key.
  [certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.0.8 127.0.0.1 ::1]
  [certificates] Generated etcd/healthcheck-client certificate and key.
  [certificates] Generated apiserver-etcd-client certificate and key.
  [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
  [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
  [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
  [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
  [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
  [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
  [init] this might take a minute or longer if the control plane images have to be pulled
  [apiclient] All control plane components are healthy after 49.502361 seconds
  [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
  [markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
  [markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
  [bootstraptoken] using token: 3resfo.cam2tnjxw0tastur
  [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes master has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of machines by running the following on each node
  as root:

    kubeadm join 192.168.0.8:6443 --token 3resfo.cam2tnjxw0tastur --discovery-token-ca-cert-hash sha256:4a4f45a3c7344ddfe02af363be293b21237caaf2b1598c31d6e662a18bb76fd9

Set up the kubectl config:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Before flannel is installed, the nodes remain in NotReady state.

Install flannel. Replacing the version number in the URL with master gives the latest version.
 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

After installing the network addon, run kubectl get pods --all-namespaces and check whether kube-dns is Running to tell whether the network was installed successfully.

Check the master's health (e.g. with kubectl get cs).


V. Join the nodes to the cluster

1. Configure kubelet

Copy the kubelet file from the master to node01 and node02:


  
  
  scp /etc/sysconfig/kubelet node01:/etc/sysconfig/kubelet
  scp /etc/sysconfig/kubelet node02:/etc/sysconfig/kubelet

2. Run the kubeadm join command:

kubeadm join 192.168.0.8:6443 --token 3resfo.cam2tnjxw0tastur --discovery-token-ca-cert-hash sha256:4a4f45a3c7344ddfe02af363be293b21237caaf2b1598c31d6e662a18bb76fd9
  
  

If you have lost the join command, regenerate it with:

kubeadm token create --print-join-command
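If a script needs the token and CA hash separately (for example to template the join command per node), they can be parsed out of the printed command. A sketch over the example output from the init run above:

```shell
#!/bin/sh
# Parse the token and discovery hash out of a "kubeadm join ..." command line.
# The string below is the join command printed by the kubeadm init run above.
join_cmd='kubeadm join 192.168.0.8:6443 --token 3resfo.cam2tnjxw0tastur --discovery-token-ca-cert-hash sha256:4a4f45a3c7344ddfe02af363be293b21237caaf2b1598c31d6e662a18bb76fd9'

token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
ca_hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$ca_hash"
```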
  
  

 


VI. Test the cluster

kubectl get nodes
  
  


The cluster has been deployed successfully.

VII. Cluster initialization errors and fixes:

Problem 1:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to fetch release information. URL: "https://storage.googleapis.com/kubernetes-release/release/stable-1.7.5.txt" Status: 404 Not Found
#Fix:
Add the version flag "--kubernetes-version=v1.7.5", run kubeadm reset, then run init again.
Problem 2:
W1205 18:49:21.323220  106548 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
#Fix:
Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and set:
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"
Problem 3:
k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.0.8:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&resourceVersion=0: dial tcp 192.168.0.8:6443: getsockopt: connection refused
k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://192.168.0.8:6443/api/v1/services?resourceVersion=0: dial tcp 192.168.0.8:6443: getsockopt: connection refused
k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://192.168.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&resourceVersion=0: dial tcp 192.168.0.8:6443: getsockopt: connection refused
Unable to write event: 'Post https://192.168.0.8:6443/api/v1/namespaces/kube-system/events: dial tcp 192.168.0.8:6443: getsockopt: connection refused' (may retry after sleeping)
Failed to get status for pod "etcd-k8s-master_kube-system(5802ae0664772d031dee332b3c63498e)": Get https://192.168.0.8:6443/api/v1/namespaces/kube-system/pods/etcd-k8s-master: dial tcp 192.168.0.8:6443: getsockopt: connection refused
#Fix:
Start the firewall:
systemctl start firewalld
Add firewall rules:
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=10250-10255/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
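The per-port firewall-cmd calls above can be generated in one loop. This sketch only prints the commands so they can be reviewed before running (a dry run by design; pipe the output through sh once it looks right):

```shell
#!/bin/sh
# Print (rather than run) the firewall-cmd invocations for the ports the
# cluster needs: API server, etcd, kubelet ports, and the NodePort range.
k8s_firewall_cmds() {
  for port in 6443 2379-2380 10250-10255 30000-32767; do
    echo "firewall-cmd --zone=public --add-port=${port}/tcp --permanent"
  done
  echo "firewall-cmd --reload"
}
k8s_firewall_cmds
```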
Problem 4:
[root@master]# kubectl get node
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
#Fix:
[root@master]# mv  $HOME/.kube $HOME/.kube.bak
[root@master]# mkdir -p $HOME/.kube
[root@master]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master]# chown $(id -u):$(id -g) $HOME/.kube/config
VIII. Install kubernetes-dashboard

1. Download kubernetes-dashboard.yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  
  

2. Edit kubernetes-dashboard.yaml

Add type: NodePort and nodePort: 30001 to the Service section, and change serviceAccountName: kubernetes-dashboard (line 146) to serviceAccountName: kubernetes-dashboard-admin.
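The two Service edits can also be applied with GNU sed instead of by hand. A sketch (the helper name is my own); it filters stdin to stdout and assumes the stock layout of the Service section. Apply it to the Service document only (e.g. after splitting the file on ---), since the Deployment also has a top-level spec:

```shell
#!/bin/sh
# patch_dashboard_svc: hypothetical helper; inserts "type: NodePort" after the
# Service's top-level "spec:" and "nodePort: 30001" after the targetPort line.
# Relies on GNU sed accepting \n in the replacement text.
patch_dashboard_svc() {
  sed -e 's/^spec:/spec:\n  type: NodePort/' \
      -e 's/^\( *\)targetPort: 8443/&\n\1nodePort: 30001/'
}
```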

The resulting kubernetes-dashboard.yaml:


  
  
  # Copyright 2017 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  # Configuration to deploy release version of the Dashboard UI compatible with
  # Kubernetes 1.8.
  #
  # Example usage: kubectl create -f <this_file>

  # ------------------- Dashboard Secret ------------------- #
  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-certs
    namespace: kube-system
  type: Opaque
  ---
  # ------------------- Dashboard Service Account ------------------- #
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kube-system
  ---
  # ------------------- Dashboard Role & Role Binding ------------------- #
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: kubernetes-dashboard-minimal
    namespace: kube-system
  rules:
    # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create"]
    # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
    # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics from heapster.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
    verbs: ["get"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: kubernetes-dashboard-minimal
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: kubernetes-dashboard-minimal
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
  ---
  # ------------------- Dashboard Deployment ------------------- #
  kind: Deployment
  apiVersion: apps/v1beta2
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
      spec:
        containers:
        - name: kubernetes-dashboard
          image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
          ports:
          - containerPort: 8443
            protocol: TCP
          args:
            - --auto-generate-certificates
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
          - name: kubernetes-dashboard-certs
            mountPath: /certs
            # Create on-disk volume to store exec logs
          - mountPath: /tmp
            name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
        volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
        serviceAccountName: kubernetes-dashboard-admin  # pitfall if left as the default account
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
  ---
  # ------------------- Dashboard Service ------------------- #
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
    selector:
      k8s-app: kubernetes-dashboard

3. Install the dashboard

kubectl apply -f kubernetes-dashboard.yaml
  
  

If the permissions below are not granted, the dashboard will report errors.

4. Grant the dashboard account cluster-admin rights. Create a new file kubernetes-dashboard-admin.rbac.yaml:


  
  
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-admin
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard-admin
    labels:
      k8s-app: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system

Grant the permissions:

kubectl apply -f  kubernetes-dashboard-admin.rbac.yaml
  
  

5. Access the dashboard

https://192.168.0.10:30001

6. Log in with a token

Get the token:


  
  
  [root@master ~]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin|awk '{print "secret/"$1}'|xargs kubectl describe -n kube-system|grep token:|awk -F : '{print $2}'|xargs echo
  eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qYnRrZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYzZTY2NjBhLWE4NTgtMTFlOC1iNTI2LTAwMGMyOWU2ZTA4MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.CcgvvsCEkwKi0nhq-cnm-rDmLiBSclnK3H3cTugUpawvS2ruBl05jVpwPyh3pNc4Z_V5GPelTa7tsVJHDQ2uG1P7HYqKkcvtFnua9y5DAFMqtOf-sxiHSDjIkphXDKCxRVaGXQzv9bTC-MAT0NnJzK08w8lZlITWDuT_GQQHcczCOVknFnwVFDEzQKR0DLc9Bx2Gw-5TINidjhVHIWmUMhfEZE5F1D_kvBHRS6bgE43h0OsoPqs3BeCzxRTCbdbeDb9wIVcBxoi9QF9pE5k5dyuNOylRP2SLiHrK8nuCZSESZkRSDkC_3M2ax_2yfnBGi1cwH1A4JAgcMr7iIIBKAg

Copy the token into the login page to sign in.
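To double-check which service account a token belongs to before pasting it into the login page, the JWT payload (the middle dot-separated segment) can be decoded locally. A sketch; the helper name is assumed:

```shell
#!/bin/sh
# decode_jwt_payload: hypothetical helper; base64-decodes the payload segment
# of a JWT so fields like "sub" (the service account) can be inspected.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')  # base64url -> base64
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do
    payload="${payload}="                                   # restore stripped padding
  done
  printf '%s' "$payload" | base64 -d
}
```

The decoded JSON should show sub: system:serviceaccount:kube-system:kubernetes-dashboard-admin for the token obtained above.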
