1. Prepare four CentOS Linux release 7.9.2009 (Core) machines (2 CPU cores, 2 GB RAM each) and install Docker on all of them

Installing Docker

https://docs.docker.com/engine/install/centos/

1. Remove any previously installed Docker packages (skip this step if Docker was never installed):
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
2. Install the yum utilities and download the docker-ce.repo file:
[root@cali ~]# yum install -y yum-utils
[root@cali ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
The docker-ce.repo file is saved under /etc/yum.repos.d:
[root@cali yum.repos.d]# pwd
/etc/yum.repos.d
[root@cali yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo          docker-ce.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repo  nginx.repo
[root@cali yum.repos.d]# 
3. Install the docker-ce packages

Docker is container-management software (a "container engine"), split into several packages:
docker-ce               the server-side daemon
docker-ce-cli           the client-side command-line tool
docker-compose-plugin   the Compose plugin, used to start many containers in one batch on a single machine
containerd.io           the low-level runtime that actually starts containers
[root@cali yum.repos.d]# yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

[root@cali yum.repos.d]# docker --version
Docker version 20.10.17, build 100c701
[root@cali yum.repos.d]# 
4. Start the Docker service
[root@cali yum.repos.d]# systemctl start docker
[root@cali yum.repos.d]# ps aux|grep docker
root       1892  1.4  1.5 1095108 58972 ?       Ssl  11:39   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       2029  0.0  0.0 112824   976 pts/0    S+   11:40   0:00 grep --color=auto docker
[root@cali yum.repos.d]# 
5. Enable the Docker service at boot
[root@cali yum.repos.d]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@cali yum.repos.d]# 
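As a quick sanity check (optional; assumes the host can reach Docker Hub), the daemon can be exercised with a throwaway container:

# confirm the service is enabled and the daemon answers
systemctl is-enabled docker
docker info >/dev/null && echo "docker daemon OK"
# run a one-off test container that removes itself on exit
docker run --rm hello-world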

2. Deploy the k8s cluster with kubeadm

master:192.168.2.78

node1:192.168.2.76

node2:192.168.2.50

node3:192.168.2.48

3. Configure Docker to use systemd as the default cgroup driver

Run this on every server, master and nodes alike:

cat <<EOF > /etc/docker/daemon.json
{
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# restart docker
systemctl restart docker
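To confirm the change took effect, the reported driver should now read systemd (a quick check, run on each host):

docker info 2>/dev/null | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd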

4. Disable the swap partition

k8s does not want to store data on swap, because using swap degrades performance.

Run on every server:

swapoff -a                                      # disable swap immediately (lost on reboot)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # disable permanently by commenting out the swap entry
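A quick check that no swap remains active:

swapon -s    # no entries means swap is off
free -m      # the Swap line should show 0 total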

5. Rename the hosts (run the matching command on each respective host) and update the hosts file

hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3

After changing the hostname, log in again with su - root.
The /etc/hosts file must be updated on every machine:

su  - root

cat >> /etc/hosts << EOF 
192.168.2.78 master
192.168.2.76 node1
192.168.2.50 node2
192.168.2.48 node3
EOF
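To confirm that every host resolves the new names (a quick check):

getent hosts master node1 node2 node3
ping -c 2 master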

6. Install kubeadm, kubelet, and kubectl on every machine

Every server in the cluster needs these packages.

kubeadm  --> the k8s management program, run on the master to build the whole cluster; behind the scenes it executes a large number of steps that bring k8s up
kubelet  --> the agent that runs on every node in the cluster; it manages Docker, telling it which containers to start, and handles master-node communication. It makes sure containers are running in Pods.
kubectl  --> the command-line tool used on the master to issue instructions to the nodes and control what they do

Add the Kubernetes YUM repository:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
# pin the versions: a plain "yum install -y kubelet kubeadm kubectl" would pull 1.24+,
# whose default container runtime is no longer Docker
https://www.docker.com/blog/dockershim-not-needed-docker-desktop-with-kubernetes-1-24/
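Before pinning, the versions available in the repo can be listed (optional):

yum list --showduplicates kubeadm | tail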

# enable at boot: kubelet is the k8s agent on every node and must be running from startup
systemctl enable  kubelet
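A quick check that matching 1.23.6 binaries landed on each machine:

kubeadm version -o short
kubelet --version
kubectl version --client --short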

7. Deploy the Kubernetes master

Run on the master host.

# pre-pull the coredns:1.8.4 image; it is needed later and must be pulled on every machine

[root@master ~]# docker pull coredns/coredns:1.8.4
[root@master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
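To confirm both the original image and the retag are present:

docker images | grep coredns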

# run the initialization on the master server

[root@master ~]# kubeadm init \
	--apiserver-advertise-address=192.168.2.78 \
	--image-repository registry.aliyuncs.com/google_containers \
	--service-cidr=10.1.0.0/16 \
	--pod-network-cidr=10.244.0.0/16
	# 192.168.2.78 is the master's IP
	#      --service-cidr string         Use alternative range of IP address for service VIPs. (default "10.96.0.0/12"); service exposure works via DNAT
	#      --pod-network-cidr string     Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

A failed run looks like this:
[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.2.130 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Fix:
[root@master ~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]#
If other errors occur, try:
[root@master ~]# kubeadm reset
and then re-run the kubeadm init command above.
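Note that echoing into /proc does not survive a reboot. A common way to make the setting persistent on every machine is a sysctl drop-in (a sketch; the file name k8s.conf is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter    # the bridge sysctls only exist once this module is loaded
sysctl --system          # reload all sysctl configuration files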

After the fix, run the init command again; on success it prints:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.78:6443 --token e3uvvd.qwysy381vq1a82jy \
	--discovery-token-ca-cert-hash sha256:4d4dfc4644449d4013586ef7ca477b9285895dbaad39223095e912d1013b6986

Follow the prompts:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config
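kubectl should now be able to reach the apiserver (a quick check):

kubectl cluster-info
kubectl get nodes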

8. Join the node servers to the k8s cluster

Run on every node.
First test that node1 can reach the master:

[root@node1 ~]# ping master
PING master (192.168.2.130) 56(84) bytes of data.
64 bytes from master (192.168.2.130): icmp_seq=1 ttl=64 time=1.63 ms
64 bytes from master (192.168.2.130): icmp_seq=2 ttl=64 time=0.701 ms
^C
--- master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.701/1.168/1.636/0.468 ms
[root@node1 ~]#

A failed join looks like this:
[root@node1 ~]# kubeadm join 192.168.2.78:6443 --token 2fiwt1.47ss9cjmyaztw58b \
	--discovery-token-ca-cert-hash sha256:653c7264622a6935f9b3ec5509570dc288e52143aeb78b139ca3eddf10f2cdf8
[preflight] Running pre-flight checks
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Fix:
[root@node1 ~]# swapoff -a
[root@node1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]#
A successful join:
[root@node1 ~]# kubeadm join 192.168.2.130:6443 --token 2fiwt1.47ss9cjmyaztw58b --discovery-token-ca-cert-hash sha256:653c7264622a6935f9b3ec5509570dc288e52143aeb78b139ca3eddf10f2cdf8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   31m   v1.23.5
node1    NotReady   <none>                 92s   v1.23.5
[root@master ~]#

If the join reports some other error, try running kubeadm reset on the node and then re-run the join command.
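The bootstrap token expires after 24 hours by default; if it has expired, a fresh join command can be generated on the master:

kubeadm token create --print-join-command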

List all the node servers from the master:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   35m     v1.23.5
node1    NotReady   <none>                 5m54s   v1.23.5
node2    NotReady   <none>                 36s     v1.23.5
node3    NotReady   <none>                 29s     v1.23.5

NotReady means master-node communication is still not working: pod-to-pod networking is not ready yet.

# deleting a node (node1) from k8s:

		kubectl drain node1 --delete-emptydir-data --force --ignore-daemonsets
		kubectl delete node node1
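If the deleted node is meant to rejoin later, reset it first (a sketch, run on the node itself; adjust the cleanup paths to your setup):

kubeadm reset -f          # wipe the old cluster state on the node
rm -rf /etc/cni/net.d     # remove leftover CNI configuration
# then run a fresh "kubeadm join ..." command printed by the master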

9. Install the flannel network plugin (run on the master)

This lets pods on the master and pods on the nodes communicate with each other.

The kube-flannel.yml file must be created by hand; its contents are as follows:

[root@master ~]# vim kube-flannel.yml
[root@master ~]# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Deploy flannel:

[root@master ~]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# ps aux |grep flannel
root      95202  0.6  1.2 1261176 22428 ?       Ssl  09:54   0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root      95899  0.0  0.0 112828   988 pts/2    S+   09:56   0:00 grep --color=auto flannel

[root@master ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED              STATUS              PORTS     NAMES
da90ebbb37b6   dee1cac4dd20                                        "/opt/bin/flanneld -…"   3 seconds ago        Up 3 seconds                  k8s_kube-flannel_kube-flannel-ds-pp6m4_kube-system_eb0e9a91-9bbf-4989-a1e4-8a9f62ace8f7_0
9929599d22a8   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 28 seconds ago       Up 28 seconds                 k8s_POD_kube-flannel-ds-pp6m4_kube-system_eb0e9a91-9bbf-4989-a1e4-8a9f62ace8f7_0
67f7f3586495   8b675dda11bb                                        "/opt/bin/flanneld -…"   About a minute ago   Up About a minute             k8s_kube-flannel_kube-flannel-ds-zq8gz_kube-flannel_b263f0bc-ccd2-4a9d-b6e1-9eef9a4c5b0e_0
7ecb7fc0a01b   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 9 minutes ago        Up 9 minutes                  k8s_POD_kube-flannel-ds-zq8gz_kube-flannel_b263f0bc-ccd2-4a9d-b6e1-9eef9a4c5b0e_0
e7f18e467666   4c0375452406                                        "/usr/local/bin/kube…"   2 hours ago          Up 2 hours                    k8s_kube-proxy_kube-proxy-gkb8g_kube-system_1e4f0d05-ba83-43e2-af0f-eec2ecab205f_0
d595e1a4a5d8   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 2 hours ago          Up 2 hours                    k8s_POD_kube-proxy-gkb8g_kube-system_1e4f0d05-ba83-43e2-af0f-eec2ecab205f_0
8f6be63d4b1c   25f8c7f3da61                                        "etcd --advertise-cl…"   2 hours ago          Up 2 hours                    k8s_etcd_etcd-master_kube-system_a2928a0f3722ba1f5f83180f4c318f1c_2
5af6d1e680e6   595f327f224a                                        "kube-scheduler --au…"   2 hours ago          Up 2 hours                    k8s_kube-scheduler_kube-scheduler-master_kube-system_36cef74ed42d7978355f5334aa6b9ad1_1
b49ebac865a0   8fa62c12256d                                        "kube-apiserver --ad…"   2 hours ago          Up 2 hours                    k8s_kube-apiserver_kube-apiserver-master_kube-system_0701caac0d12394a55489f1ca26bc4c0_1
a98107cffbd2   df7b72818ad2                                        "kube-controller-man…"   2 hours ago          Up 2 hours                    k8s_kube-controller-manager_kube-controller-manager-master_kube-system_15639df1c786ce2bf1a98df7bfaa62d1_1
d312bb2ac6a8   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 2 hours ago          Up 2 hours                    k8s_POD_kube-scheduler-master_kube-system_36cef74ed42d7978355f5334aa6b9ad1_0
16303d776710   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 2 hours ago          Up 2 hours                    k8s_POD_kube-apiserver-master_kube-system_0701caac0d12394a55489f1ca26bc4c0_0
a11cfa01cbb6   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 2 hours ago          Up 2 hours                    k8s_POD_etcd-master_kube-system_a2928a0f3722ba1f5f83180f4c318f1c_0
397b85827d45   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 2 hours ago          Up 2 hours                    k8s_POD_kube-controller-manager-master_kube-system_15639df1c786ce2bf1a98df7bfaa62d1_0
# check node status
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   147m   v1.23.6
node1    Ready    <none>                 134m   v1.23.6
node2    Ready    <none>                 27m    v1.23.6
node3    Ready    <none>                 27m    v1.23.6
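To confirm the flannel DaemonSet placed a pod on every node (the pods carry the app=flannel label defined in the manifest above):

kubectl get pods -n kube-system -l app=flannel -o wide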

View detailed information about each node:

[root@master ~]# kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master   Ready    control-plane,master   149m   v1.23.6   192.168.2.78   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://20.10.21
node1    Ready    <none>                 137m   v1.23.6   192.168.2.76   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://20.10.21
node2    Ready    <none>                 30m    v1.23.6   192.168.2.50   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://20.10.21
node3    Ready    <none>                 30m    v1.23.6   192.168.2.48   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://20.10.21

List the pods in the kube-system namespace:

[root@master ~]# kubectl get pod -n kube-system 
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-92vrq          1/1     Running   0          150m
coredns-6d8c4cb4d-qd4fh          1/1     Running   0          150m
etcd-master                      1/1     Running   2          151m
kube-apiserver-master            1/1     Running   1          151m
kube-controller-manager-master   1/1     Running   1          151m
kube-flannel-ds-fsxk8            1/1     Running   0          5m6s
kube-flannel-ds-pp6m4            1/1     Running   0          5m6s
kube-flannel-ds-sctzg            1/1     Running   0          5m6s
kube-flannel-ds-wd864            1/1     Running   0          5m6s
kube-proxy-28q88                 1/1     Running   1          31m
kube-proxy-f8plc                 1/1     Running   0          138m
kube-proxy-gkb8g                 1/1     Running   0          150m
kube-proxy-pcpw4                 1/1     Running   1          31m
kube-scheduler-master            1/1     Running   1          151m

List the namespaces in k8s (these were created by k8s itself):

[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   151m
kube-flannel      Active   14m
kube-node-lease   Active   151m
kube-public       Active   151m
kube-system       Active   151m
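As a final smoke test, a small deployment can verify that scheduling and networking work end to end (a sketch; the deployment name nginx is arbitrary):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide    # the pod should land on a node and reach Running
kubectl get svc nginx       # note the NodePort, then curl http://<node-ip>:<nodeport>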