Before you begin, the machines used for the Kubernetes cluster must meet the following requirements:

1. One or more machines (an odd number, at least 3) running CentOS 7.x, preferably with a kernel no older than 4.4, because the stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable

2. Hardware: at least 2 GB of RAM, at least 2 CPUs, and at least 30 GB of disk

3. All machines in the cluster can reach each other over the network

4. Internet access, required for pulling images

5. Swap disabled

Environment plan
192.168.1.121      master (4 CPU, 4 GB RAM, 50 GB disk)
192.168.1.112      node1  (2 CPU, 4 GB RAM, 50 GB disk)
192.168.1.113      node2  (2 CPU, 4 GB RAM, 50 GB disk)

Set the hostname on each of the three servers
# hostnamectl set-hostname master
# hostnamectl set-hostname node1
# hostnamectl set-hostname node2

Add the following entries to /etc/hosts on every machine
# vim /etc/hosts
192.168.1.121      master
192.168.1.112      node1
192.168.1.113      node2
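
The same entries can also be appended in one step on each machine; this is just a sketch equivalent to the manual edit above:
# cat >> /etc/hosts <<'EOF'
192.168.1.121      master
192.168.1.112      node1
192.168.1.113      node2
EOF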

I. Initialize the system
Stop firewalld and disable it at boot, then install and enable iptables
# systemctl stop firewalld
# systemctl disable firewalld
# yum -y install iptables-services iptables && systemctl enable iptables && systemctl start iptables && iptables -F && service iptables save

Permanently disable SELinux
Check the current status with: sestatus
# vim /etc/selinux/config
SELINUX=disabled
# setenforce 0

Raise the limits on open files and processes
# vim /etc/security/limits.conf
Append at the end of the file:
*  soft  nofile  204800
*  hard  nofile  204800
*  soft  nproc   204800
*  hard  nproc   204800

# vim /etc/security/limits.d/20-nproc.conf
Append at the end of the file:
*  soft  nproc  204800
*  hard  nproc  204800
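
A quick way to confirm the new limits after opening a new login session (ulimit is a standard shell builtin):
# ulimit -n    # open-file limit (nofile)
# ulimit -u    # max user processes (nproc)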

Turn off swap and comment out the swap line in /etc/fstab on every machine
# swapoff -a
# sed -i '/swap/ s/^/#/g' /etc/fstab

Set kernel parameters
# vim /etc/sysctl.d/kubernetes.conf

## Allow iptables to filter and redirect traffic crossing the bridge [important]
net.bridge.bridge-nf-call-iptables=1
## Do the same for IPv6 traffic on the bridge [important]
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
## Avoid using swap; only fall back to it when the system is about to run out of memory
vm.swappiness=0
## Do not check whether enough physical memory is available when overcommitting
vm.overcommit_memory=1
## Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
## Disable IPv6 [important]
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

Apply the kernel settings
# sysctl -p /etc/sysctl.d/kubernetes.conf

Upgrade the Linux kernel
Install the ELRepo repository
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
After the kernel is installed, check that the corresponding menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, run the installation again
# yum --enablerepo=elrepo-kernel install -y kernel-lt

Set the new kernel as the default boot entry
# grub2-set-default 'CentOS Linux (5.4.114-1.el7.elrepo.x86_64) 7 (Core)'
# reboot

Verify that the kernel upgrade succeeded
# uname -a

Prerequisites for enabling IPVS in kube-proxy
# modprobe br_netfilter
# vim /etc/sysconfig/modules/ipvs.modules 
Add the following content
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# chmod 755 /etc/sysconfig/modules/ipvs.modules
# bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack
Note: Linux kernel 4.19 renamed nf_conntrack_ipv4 to nf_conntrack, while kube-proxy versions below 1.13 hard-depend on nf_conntrack_ipv4. There are two ways to resolve this:
1. Downgrade the kernel to 4.18
2. Upgrade kube-proxy to 1.13+ (recommended; no reboot required and minimal impact)
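
If the ipvs.modules script has to work on both the stock 3.10 kernel and the upgraded 5.4 kernel, a minimal sketch that picks the conntrack module name based on the running kernel (this conditional is an illustration, not part of the original script) is:
#!/bin/bash
# load nf_conntrack on kernels >= 4.19, nf_conntrack_ipv4 on older kernels
kver=$(uname -r | cut -d. -f1-2)
if [ "$(printf '%s\n' "$kver" 4.19 | sort -V | head -n1)" = "4.19" ]; then
    modprobe -- nf_conntrack
else
    modprobe -- nf_conntrack_ipv4
fi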

Set the time zone and stop services that are not needed
# timedatectl set-timezone Asia/Shanghai
# timedatectl set-local-rtc 0
# systemctl restart rsyslog
# systemctl restart crond
# systemctl stop postfix && systemctl disable postfix

Install dependencies
# yum -y install conntrack ntpdate ntp ipvsadm ipset jq yum-utils device-mapper-persistent-data lvm2  libseccomp  git net-tools


II. Install Docker
Be sure to install Docker 19.03.9; otherwise kubeadm prints a warning while initializing the master: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.16. Latest validated version: 19.03.9

Add the Aliyun repository
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Refresh the yum cache
# yum clean all
# yum makecache fast

Install docker-ce
List the available versions
# yum list docker-ce --showduplicates | sort -r
# yum install -y docker-ce-19.03.9-3.el7

Configure Docker registry mirrors
# mkdir /etc/docker
# vim /etc/docker/daemon.json 
{
  "registry-mirrors": [
      "http://hub-mirror.c.163.com",
      "https://docker.mirrors.ustc.edu.cn",
      "https://registry.docker-cn.com"
      ]
}

# systemctl daemon-reload
# systemctl enable docker
# systemctl start docker

Note: at this point, check again whether the system has fallen back to the old kernel; if it has, rerun the kernel upgrade steps above.

Change the Docker cgroup driver to systemd and restart Docker
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the nodes more stable under resource pressure, so change the cgroup driver on every node to systemd. The reasoning given by Kubernetes is roughly: systemd is the cgroup manager that ships with the system, exists from system initialization, is tightly integrated with cgroups and already assigns a cgroup to every process, so it should be used for management; setting cgroupfs instead means two cgroup managers coexist, and experience shows this can become unstable when resources are under pressure.

# vim /etc/docker/daemon.json
Note: merge the following keys into the existing daemon.json, placed before the registry-mirrors entry (the file must remain a single JSON object)
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts":{
    "max-size": "100m"
  }
}
# systemctl daemon-reload
# systemctl restart docker
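
For reference, the two daemon.json snippets above must end up in a single JSON object. A sketch that writes the merged file in one step (mirror list taken from the earlier step) and restarts Docker:
# cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
# systemctl daemon-reload && systemctl restart docker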

Check the result
# docker info | grep Cgroup
Cgroup Driver: systemd

Install the k8s components: kubeadm, kubelet and kubectl
Because releases change frequently, a specific version is pinned here
# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# yum makecache fast
# yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
# systemctl enable kubelet
Note: there is no need to start the kubelet yet; kubeadm init will configure and start it
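
An optional quick check that the pinned versions were installed:
# kubeadm version -o short      # expect v1.20.4
# kubelet --version
# kubectl version --client --short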

============================== All of the steps above must be run on all three machines ==============================

Load the k8s component images on the master node
Upload k8s-v1.20.4-basic-images.zip to the master server and import the images in bulk with the script below
# unzip /root/k8s-v1.20.4-basic-images.zip
# vim k8simages.sh 
#!/bin/bash
ls /root/images > /tmp/k8simages.list
cd /root/images
for i in $(cat /tmp/k8simages.list)
do
    /usr/bin/docker load -i $i
done
rm -fr /tmp/k8simages.list
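
Run the script once the archive has been unpacked (it assumes the image files end up in /root/images, which is the directory the script reads):
# bash k8simages.sh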

Alternatively, pull the images with the following script
# vim initimage.sh

#!/usr/bin/env bash
K8S_VERSION=v1.20.4
ETCD_VERSION=3.4.13-0
DASHBOARD_VERSION=v2.2.0
DNS_VERSION=1.7.0
PAUSE_VERSION=3.2

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:$K8S_VERSION
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:$K8S_VERSION
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:$K8S_VERSION
docker pull registry.aliyuncs.com/google_containers/kube-proxy:$K8S_VERSION
docker pull registry.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
docker pull registry.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.aliyuncs.com/google_containers/coredns:$DNS_VERSION

# chmod +x initimage.sh
Run the script to start pulling the images
# ./initimage.sh
 

Initialize the master node. Print the default init configuration:
# kubeadm config print init-defaults > kubeadm-config.yaml
Edit kubeadm-config.yaml and change it to the following
# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.121        # change this to the master's actual IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.4            # make sure this matches the installed version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16            # add this line; it sets the Pod network CIDR used by flannel
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---  # the block below is new; it switches kube-proxy from the default mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

# kubeadm init --config=kubeadm-config.yaml --upload-certs
On success, the output includes:
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: the worker nodes use the command below to join the cluster; save the token and certificate hash that are actually printed
kubeadm join 192.168.1.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:50ca5375950abfa05cd4bd37dfb60e9ccd078083aeca49fa8bb6275c13d2a2cd

Note: kubeadm init would also pull the images, but since they were already loaded with the script above, nothing needs to be pulled again
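
If the join command is lost or the token expires (the TTL in the config above is 24h), a new one can be generated on the master at any time:
# kubeadm token create --print-join-command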


Instead of the configuration-file approach above, the cluster can also be initialized with the command below; either method works

# kubeadm init \
--apiserver-advertise-address=192.168.1.121 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

After initialization completes, run the commands from the output. They set up the kubeconfig that kubectl uses when talking to the API server over HTTPS
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# export KUBECONFIG=/etc/kubernetes/admin.conf

Locations of the main k8s configuration files
Environment variables (env): /var/lib/kubelet/kubeadm-flags.env
kubelet configuration file: /var/lib/kubelet/config.yaml
All k8s certificates: /etc/kubernetes/pki
Configuration files: /etc/kubernetes

Check the nodes (the STATUS is NotReady because the flannel network has not been deployed yet)
# kubectl get node 
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   13m   v1.20.4

After the initialization above finishes on the master, run the following command on each of the two worker nodes to join them to the cluster
# kubeadm join 192.168.1.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:50ca5375950abfa05cd4bd37dfb60e9ccd078083aeca49fa8bb6275c13d2a2cd

Now check the nodes again
# kubectl get node
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   15m   v1.20.4
node1    NotReady   <none>                 37s   v1.20.4
node2    NotReady   <none>                 14s   v1.20.4

Deploy the flannel network; it must be deployed on all nodes
Run on the master first
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
If you see the error above, look up the current IP of raw.githubusercontent.com (for example via http://ip.tool.chinaz.com/), add an entry mapping that domain to the IP in /etc/hosts, and then run the flannel install command again
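
The hosts entry added for the workaround looks like this; <resolved-ip> is a placeholder for whatever address the lookup returns, since the real IP changes over time:
# echo '<resolved-ip>  raw.githubusercontent.com' >> /etc/hosts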

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Wait a few minutes and check that every Pod is Running. If the output looks like the following, the cluster started successfully. If the coredns Pods are still Pending or ContainerCreating, wait a little longer; if they still do not start, try restarting the kubelet.
# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-2gqv6         1/1     Running   0          65m
coredns-7f89b7bc75-l6ml5         1/1     Running   0          65m
etcd-master                      1/1     Running   1          65m
kube-apiserver-master            1/1     Running   1          65m
kube-controller-manager-master   1/1     Running   1          65m
kube-flannel-ds-2vbls            1/1     Running   0          52m
kube-proxy-jt2nz                 1/1     Running   2          65m
kube-scheduler-master            1/1     Running   2          65m

Deploy flannel on the worker nodes
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error occurs because kubectl needs to run with the kubernetes-admin credentials. To fix it, first add the /etc/hosts entry for the domain as above, then copy /etc/kubernetes/admin.conf from the master to the same path on each node (see the scp sketch below), and configure the environment variable:
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# source ~/.bash_profile
Then run the flannel install again and it will succeed
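
The admin.conf copy mentioned above can be done with scp, for example (run on each node; this assumes root SSH access to the master):
# scp root@192.168.1.121:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf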

Once flannel is deployed successfully on all nodes, check the status of each node
# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   77m   v1.20.4
node1    Ready    <none>                 12m   v1.20.4
node2    Ready    <none>                 12m   v1.20.4

# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-2gqv6         1/1     Running   0          81m
coredns-7f89b7bc75-l6ml5         1/1     Running   0          81m
etcd-master                      1/1     Running   1          81m
kube-apiserver-master            1/1     Running   1          81m
kube-controller-manager-master   1/1     Running   1          81m
kube-flannel-ds-2vbls            1/1     Running   0          18m
kube-flannel-ds-55l6q            1/1     Running   0          18m22s
kube-flannel-ds-stzw6            1/1     Running   0          15m22s
kube-proxy-5hcg8                 1/1     Running   0          18m22s
kube-proxy-ftfm6                 1/1     Running   0          15m22s
kube-proxy-jt2nz                 1/1     Running   2          21m
kube-scheduler-master            1/1     Running   2          81m

# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
coredns-7f89b7bc75-2gqv6         1/1     Running   0          96m   10.244.0.3      master   <none>           <none>
coredns-7f89b7bc75-l6ml5         1/1     Running   0          96m   10.244.0.2      master   <none>           <none>
etcd-master                      1/1     Running   1          96m   192.168.1.121   master   <none>           <none>
kube-apiserver-master            1/1     Running   1          96m   192.168.1.121   master   <none>           <none>
kube-controller-manager-master   1/1     Running   1          96m   192.168.1.121   master   <none>           <none>
kube-flannel-ds-2vbls            1/1     Running   0          83m   192.168.1.121   master   <none>           <none>
kube-flannel-ds-55l6q            1/1     Running   0          23m   192.168.1.112   node1    <none>           <none>
kube-flannel-ds-stzw6            1/1     Running   0          20m   192.168.1.113   node2    <none>           <none>
kube-proxy-5hcg8                 1/1     Running   0          23m   192.168.1.112   node1    <none>           <none>
kube-proxy-ftfm6                 1/1     Running   0          20m   192.168.1.113   node2    <none>           <none>
kube-proxy-jt2nz                 1/1     Running   2          96m   192.168.1.121   master   <none>           <none>
kube-scheduler-master            1/1     Running   2          96m   192.168.1.121   master   <none>           <none>

Check the cluster component status
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

If you see the errors above, comment out the --port=0 argument in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests (one way is sketched below).
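For example, a sed sketch (it assumes the flag appears as a list item "- --port=0"; check the files afterwards):
# cd /etc/kubernetes/manifests
# sed -i.bak 's/^\( *\)- --port=0/\1# - --port=0/' kube-controller-manager.yaml kube-scheduler.yaml
The kubelet re-creates the static pods automatically once the manifest files change.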
Then check again

# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

======================================= k8s deployment complete ==================================

Tear down the k8s cluster
# kubeadm reset -f
Remove the k8s services and files
# modprobe -r ipip
# systemctl stop kubelet kube-proxy flanneld kube-apiserver kube-controller-manager kube-scheduler
# rm -rf ~/.kube/
# rm -rf /etc/kubernetes/
# rm -rf /usr/bin/kube*
# rm -rf /etc/cni
# rm -rf /opt/cni
# rm -rf /var/etcd
# rm -rf /var/lib/etcd
# rm -rf /var/lib/kubelet
# rm -rf /var/run/kubernetes
# rm -rf /var/run/flannel/
# rm -rf /etc/systemd/system/{etcd,kubelet,kube-apiserver,kube-controller-manager,kube-scheduler,flanneld}.service
# mount | grep '/var/lib/kubelet'| awk '{print $3}'|xargs sudo umount
# rm -rf /root/local/bin/{etcd,kubelet,kube-apiserver,kube-controller-manager,kube-scheduler,flanneld,mk-docker-opts.sh}
# yum clean all
# yum remove -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4


Install the k8s dashboard

Download the yaml file
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

Find the following sections and edit them
# vim recommended.yaml
..........
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # expose the service externally via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31668     # pin a specific node port
  selector:
    k8s-app: kubernetes-dashboard

.........

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin           # the default here is the kubernetes-dashboard role, which does not have enough permissions; change it to cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard  

Note: the dashboard must be installed on the master
# kubectl apply -f recommended.yaml
# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-79c5968bdc-8cz9f   1/1     Running   1          11m
pod/kubernetes-dashboard-9f9799597-m6jr5         1/1     Running   1          11m

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.97.176.207   <none>        8000/TCP        11m
service/kubernetes-dashboard        NodePort    10.105.64.244   <none>        443:31668/TCP   11m

Open https://192.168.1.121:31668 in a browser (the NodePort is reachable on any node's IP)

Get the token
# kubectl get secret --all-namespaces |grep dashboard
kubernetes-dashboard   default-token-74sdf                              kubernetes.io/service-account-token   3      13m
kubernetes-dashboard   kubernetes-dashboard-certs                       Opaque                                0      13m
kubernetes-dashboard   kubernetes-dashboard-csrf                        Opaque                                1      13m
kubernetes-dashboard   kubernetes-dashboard-key-holder                  Opaque                                2      13m
kubernetes-dashboard   kubernetes-dashboard-token-8dzfp                 kubernetes.io/service-account-token   3      13m

# kubectl describe secret kubernetes-dashboard-token-8dzfp -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-8dzfp
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: adc6957c-2e15-41d2-99d3-42edd1996042

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Iks3VnRhX0FlVVhZTENDSzhqTGtINlZNekZxMG1ZcEtrQjVXbmhNTmJ6SkUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi04ZHpmcCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFkYzY5NTdjLTJlMTUtNDFkMi05OWQzLTQyZWRkMTk5NjA0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.YduXi8hBDYrcyvsjbk5i8virJuA9k6PPtUwsbNzdyACG7PdsZZ9Tp7kQ2JamHpyNFpCuwEd3UNvPwsEqgJmCBmaEq41OXhwAxhdh8Yo41Vg6VSL8qBm2pu1Qj-W7pV-uvG4Bus-2bB15A4D6_dJ4mMCUFoLaQ7vZ_6ozY3aGT8RidsiTbzA75MPV-D67VVgRNZwQB18fTmBEvahMuvBK49PYnqSsWXhGtgFlfJCFCHmz36ycTJLwoSk6EtGJTYGSHGjdd39JrMSE2eg-ag_xQhEz9lI8QOI9Y_AwaGmFBtgFmpXSXMmHUup0mhtq17OQBlD20tvNzhJlw31rXFLPug
ca.crt:     1066 bytes
namespace:  20 bytes

Paste the token above into the browser login page and you can start using the dashboard

# kubectl get clusterrole  # list the built-in cluster roles; cluster-admin, for example, is the administrator role
Create a user
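
A sketch of what creating a dedicated login user usually looks like (the account name dashboard-admin is an example, not taken from the text above): create a ServiceAccount, bind it to cluster-admin, and read its token:
# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')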

更多推荐