00 - Installing Kubernetes with kubeadm
Base environment
| IP / hostname | Spec | Installed software |
|---|---|---|
| 192.168.10.101/k8s-master | 2 GB RAM / 2 CPU | base Docker environment (all three hosts) |
| 192.168.10.102/k8s-node01 | 2 GB RAM / 1 CPU | base Docker environment (all three hosts) |
| 192.168.10.103/k8s-node02 | 2 GB RAM / 1 CPU | base Docker environment (all three hosts) |

In the steps below, @1/@2/@3 indicate which of the three hosts a step is run on.
1: Change the cgroup driver
@1
[root@centos-01 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://5aliqknw.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
# edit the file with sed
[root@centos-01 ~]# sed -i '/registry-mirrors/s/,/ /' /etc/docker/daemon.json
[root@centos-01 ~]# sed -i '3d' /etc/docker/daemon.json
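After the two sed edits, the file should contain only the registry mirror entry; a quick check (the exact whitespace left behind by sed may differ slightly):
[root@centos-01 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://5aliqknw.mirror.aliyuncs.com"]
}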
# copy the file to the other hosts
[root@centos-01 ~]# scp /etc/docker/daemon.json 192.168.10.102:/etc/docker/daemon.json
[root@centos-01 ~]# scp /etc/docker/daemon.json 192.168.10.103:/etc/docker/daemon.json
# restart docker on all three hosts
[root@centos-01 ~]# systemctl restart docker
[root@centos-02 ~]# systemctl restart docker
[root@centos-03 ~]# systemctl restart docker
After restarting all three, confirm the cgroup driver is cgroupfs:
[root@centos-01 ~]# docker info | grep Cgroup
Cgroup Driver: cgroupfs
2: Change the hostnames (all three hosts)
[root@centos-01 ~]# hostnamectl set-hostname k8s-master
[root@centos-02 ~]# hostnamectl set-hostname k8s-node01
[root@centos-03 ~]# hostnamectl set-hostname k8s-node02
bash
After the change, run bash to start a subshell so the prompt shows the new hostname.
Add entries to the hosts file @1
[root@k8s-master ~]# cat >> /etc/hosts <<EOF
192.168.10.101 k8s-master
192.168.10.102 k8s-node01
192.168.10.103 k8s-node02
EOF
# copy the file to the other hosts
[root@k8s-master ~]# scp /etc/hosts 192.168.10.102:/etc/hosts
[root@k8s-master ~]# scp /etc/hosts 192.168.10.103:/etc/hosts
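As an optional sanity check (not part of the original steps), name resolution can be verified from the master; output is omitted here:
[root@k8s-master ~]# ping -c 1 k8s-node01
[root@k8s-master ~]# ping -c 1 k8s-node02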
Disable firewalld and SELinux @1/2/3
[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master ~]# setenforce 0
Verify the kernel parameters @1/2/3
[root@k8s-master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# if the output contains these three lines, the settings are correct
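If sysctl -p does not print these lines, the parameters have not been configured yet. A minimal sketch of the required settings, assuming they go into /etc/sysctl.conf and that the br_netfilter module is loaded for the bridge keys:
# load the bridge netfilter module so the net.bridge.* keys exist
[root@k8s-master ~]# modprobe br_netfilter
# append the three parameters
[root@k8s-master ~]# cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# apply
[root@k8s-master ~]# sysctl -p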
Disable swap @1/2/3
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# free -m
total used free shared buff/cache available
Mem: 1823 162 767 16 893 1447
Swap: 0 0 0
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab
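To make sure swap also stays off after a reboot, check that the swap entry in /etc/fstab is now commented out; the device name below is only the CentOS 7 default and may differ on your system:
[root@k8s-master ~]# grep swap /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0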
Configure and install kubeadm, kubelet and kubectl on all nodes
Download the Aliyun repo file, which provides the dependency packages **(all three hosts)**
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2523 100 2523 0 0 7865 0 --:--:-- --:--:-- --:--:-- 7884
[root@k8s-master ~]# yum repolist
Method 1: use online repos (all three hosts)
First, prepare the relevant yum repo files:
# Aliyun CentOS base repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Aliyun Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Install kubelet, kubeadm and kubectl
# list the available kubeadm versions
yum list kubeadm --showduplicates | sort -r
# install version 1.18.0
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# enable kubelet at boot
systemctl enable kubelet
Right after installation, kubelet cannot be started with systemctl start kubelet; it only starts successfully once the node has joined the cluster or been initialized as the master.
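For reference (not part of the original steps), checking the service at this point typically shows kubelet restarting in a loop, which is expected until kubeadm init or kubeadm join has been run; the exact status text may vary:
# expected before init/join: Active: activating (auto-restart)
systemctl status kubelet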
Method 2: use a local (offline) package (all three hosts)
Copy the offline package kubeadm_rpm.1.18.tgz to all three hosts.
Link: https://pan.baidu.com/s/17LwlIXSTrCH_5ZOK4x6liw
Access code: feng
[root@k8s-master ~]# ls kubeadm_rpm.1.18.tgz
kubeadm_rpm.1.18.tgz
[root@k8s-master ~]# tar zxvf kubeadm_rpm.1.18.tgz
[root@k8s-master ~]# cd kubeadm_ctl/
[root@k8s-master kubeadm_ctl]# yum -y install ./*.rpm
# enable kubelet at boot
systemctl enable kubelet
# right after installation, kubelet cannot be started with systemctl start kubelet; it only starts successfully once the node has joined the cluster or been initialized as the master
Generate the init configuration file on the master node
[root@k8s-master ~]# cd
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
# the output below is only a warning and does not affect the environment
W0222 14:32:05.078013 14834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
Modify the init configuration file
# change the IP on line 12 of init-config.yaml (advertiseAddress) to the master's IP
[root@k8s-master ~]# sed -i 's/1.2.3.4/192.168.10.101/' init-config.yaml
# switch the image repository to the domestic (Aliyun) registry
[root@k8s-master ~]# sed -i 's/imageRepository:\ k8s.gcr.io/imageRepository: registry.aliyuncs.com\/google_containers/' init-config.yaml
# we use flannel as the Pod network plugin, so add the Pod subnet 10.244.0.0/16
[root@k8s-master ~]# sed -i '37a\ \ podSubnet: 10.244.0.0\/16' init-config.yaml
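A quick way to confirm the three edits took effect is to grep the file; the output shown is illustrative and its indentation follows the yaml file:
[root@k8s-master ~]# grep -E 'advertiseAddress|imageRepository|podSubnet' init-config.yaml
  advertiseAddress: 192.168.10.101
imageRepository: registry.aliyuncs.com/google_containers
  podSubnet: 10.244.0.0/16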
Method 1: download from the Internet
Pull the required images
# list the images required by init-config.yaml
[root@k8s-master ~]# kubeadm config images list --config init-config.yaml
# pull the images required by init-config.yaml
[root@k8s-master ~]# kubeadm config images pull --config init-config.yaml
Method 2: import from the local file kubeadm.1.18.img.tgz
The kubeadm.1.18.img.tgz file is in the same netdisk share linked above.
Extract the archive:
[root@k8s-master ~]# tar zxvf kubeadm.1.18.img.tgz
kubeadm.1.18.img/
kubeadm.1.18.img/quay.io-coreos-flannel--v0.13.0
kubeadm.1.18.img/kube-flannel.yml
kubeadm.1.18.img/recommended.yaml
kubeadm.1.18.img/kube-proxy.v1.18.0
kubeadm.1.18.img/kube-scheduler.v1.18.0
kubeadm.1.18.img/kube-apiserver.v1.18.0
kubeadm.1.18.img/kube-controller-manager.v1.18.0
kubeadm.1.18.img/pause.3.2
kubeadm.1.18.img/coredns.1.6.7
kubeadm.1.18.img/etcd.3.4.3
#========================== import the images on the master
# list the image files, filtering out the .yml/.yaml manifests (names ending in "l")
[root@k8s-master ~]# ls /root/kubeadm.1.18.img | grep -v "l$" > /root/A.txt
[root@k8s-master ~]# vim load.sh
#!/bin/bash
for i in $(cat /root/A.txt)
do
docker load < /root/kubeadm.1.18.img/$i
done
[root@k8s-master ~]# chmod +x load.sh
# run the script to import the images
[root@k8s-master ~]# ./load.sh
# after the import, many images show up without a repository name or tag
# run kubeadm config images pull --config init-config.yaml to restore the names and tags
[root@k8s-master ~]# docker images
REPOSITORY                                            TAG       IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel                                v0.13.0   e708f4bb69e3   4 months ago    57.2MB
registry.aliyuncs.com/google_containers/kube-proxy    v1.18.0   43940c34f24f   11 months ago   117MB
<none>                                                <none>    74060cea7f70   11 months ago   173MB
<none>                                                <none>    d3e55153f52f   11 months ago   162MB
<none>                                                <none>    a31f78c7c8ce   11 months ago   95.3MB
<none>                                                <none>    80d28bedfe5d   12 months ago   683kB
<none>                                                <none>    67da37a9a360   13 months ago   43.8MB
<none>                                                <none>    303ce5db0e90   16 months ago   288MB
# check again: the image names and tags have been restored
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 4 months ago 57.2MB
registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 11 months ago 117MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 11 months ago 173MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 11 months ago 162MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 11 months ago 95.3MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 12 months ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 13 months ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 16 months ago 288MB
Configure kubelet (this step is not needed if the cgroup driver is cgroupfs)
After installation, kubelet may still need configuration: kubelet installed from the yum repo defaults --cgroup-driver to cgroupfs, while having both the container runtime and kubelet use systemd as the cgroup driver makes the system more stable. Check the current driver with docker info:
[root@k8s-master ~]# docker info |grep Cgroup
Cgroup Driver: systemd
If you want to keep the cgroupfs driver, the following changes are not needed. Otherwise, edit the kubelet drop-in configuration file:
[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# modify the following line
# line 3
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
# reload the unit files
systemctl daemon-reload
# check that the setting took effect
systemctl show --property=Environment kubelet | cat
# also confirm that the systemd slices are active, otherwise kubelet will fail to start
systemctl status kubepods.slice kubepods-besteffort.slice kubepods-burstable.slice
# starting kubepods.slice is enough
systemctl start kubepods.slice
# if Pods on a node stay Pending after the network plugin is installed later, it is likely the same issue
# the other nodes need to run the commands above as well
Cluster initialization (@1)
Initialize the cluster with kubeadm on the master node:
[root@k8s-master ~]# kubeadm init --config init-config.yaml
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
#-------------------------- run the following on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
#------------------------ the join command (token) for the other nodes
kubeadm join 192.168.10.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5046e19d7e6ae97d18362bc5dd85f4c1dee12af932918a43bbb472e98c10bc9f
Save the join command above. The default token is valid for 24 hours; once it expires it can no longer be used. To create a new token, run the command below (not needed while the existing token is still valid):
[root@k8s-master ~]# kubeadm token create --print-join-command
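As an optional check not shown in the original output, you can list the existing tokens to see whether the current one is still within its 24-hour lifetime before creating a new one:
[root@k8s-master ~]# kubeadm token list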
The init output records the whole process kubeadm used to bootstrap the cluster: generating the certificates, kubeconfig files, bootstrap token and so on. The kubeadm join command at the end is used later to add nodes to the cluster, and the commands below configure kubectl access to the cluster.
If you want other nodes to be able to run management commands, copy /etc/kubernetes/admin.conf to them and run the following:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Worker nodes join the cluster @2 @3
#------------------------------@2
[root@k8s-node01 kubeadm_ctl]# kubeadm join 192.168.10.101:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5046e19d7e6ae97d18362bc5dd85f4c1dee12af932918a43bbb472e98c10bc9f
#-------------------------------@3
[root@k8s-node02 kubeadm_ctl]# kubeadm join 192.168.10.101:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5046e19d7e6ae97d18362bc5dd85f4c1dee12af932918a43bbb472e98c10bc9f
Check the cluster nodes @1
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 12m v1.18.0
k8s-node01 NotReady <none> 2m31s v1.18.0
k8s-node02 NotReady <none> 2m28s v1.18.0
Check component status
# componentstatuses, abbreviated cs
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Check certificate signing requests
# certificatesigningrequests, abbreviated csr
[root@k8s-master ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-5bf9h 6m49s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:abcdef Approved,Issued
csr-rmpw4 6m46s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:abcdef Approved,Issued
If you run into problems during the installation, you can reset with:
kubeadm reset
# reset all nodes, then reinstall
ifconfig cni0 down && ip link delete cni0
rm -rf /var/lib/cni/
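kubeadm reset does not flush iptables rules by itself, so a commonly suggested extra cleanup step is the following (adjust if you rely on other firewall rules):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X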
kubectl auto-completion
kubectl ships with shell auto-completion, which has to be enabled manually:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Install the Pod network
We install the flannel network plugin, the same way an ordinary Pod workload is deployed.
Distribute the image from the master
The quay.io-coreos-flannel--v0.13.0 image is hosted on quay.io (a Red Hat registry) and is not available from the Aliyun mirror, so distribute the saved image file manually:
[root@k8s-master ~]# scp /root/kubeadm.1.18.img/quay.io-coreos-flannel--v0.13.0 192.168.10.102:/root/
[root@k8s-master ~]# scp /root/kubeadm.1.18.img/quay.io-coreos-flannel--v0.13.0 192.168.10.103:/root/
Import the image on node01/node02
[root@k8s-node01 ~]# docker load < quay.io-coreos-flannel--v0.13.0
[root@k8s-node02 ~]# docker load < quay.io-coreos-flannel--v0.13.0
Method 1: download the manifest online (the images it references will be a newer version)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# the images in this manifest can only be pulled from outside the firewall; the offline image package can be used instead
Method 2: use the offline package
[root@k8s-master ~]# find * -name "kube-flannel.yml"
kubeadm.1.18.img/kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kubeadm.1.18.img/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Note: if the nodes have more than one network interface, use the --iface argument in kube-flannel.yml to specify the name of the internal NIC, otherwise DNS resolution may fail. Add --iface=<interface name> to the flanneld startup arguments:
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0
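If kube-flannel.yml is edited after it has already been applied, re-applying the manifest updates the flannel DaemonSet (a sketch using the file path from the offline package above):
[root@k8s-master ~]# kubectl apply -f kubeadm.1.18.img/kube-flannel.yml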
After installation, kubectl get pods shows the status of the cluster components; if they are all Running, congratulations, your master node is installed successfully.
-A lists all namespaces: every pod should be Running.
Common kubectl commands
kubectl describe: show detailed information about a resource
# list namespaces (ns == namespaces)
kubectl get ns
# list the pods in a namespace
kubectl get pod -n kube-system
# list the pods in a namespace with extra detail
kubectl get pod -n kube-system -o wide
# list the pods in a namespace with extra detail and watch for changes
kubectl get pod -n kube-system -o wide -w
# if something goes wrong, inspect the pod's events and details
kubectl describe pod kube-flannel-ds-nlzw4 -n kube-system
-A is equivalent to --all-namespaces
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7ff77c879f-26lxb 1/1 Running 0 27m
kube-system coredns-7ff77c879f-j2lj6 1/1 Running 0 27m
kube-system etcd-k8s-master 1/1 Running 0 27m
kube-system kube-apiserver-k8s-master 1/1 Running 0 27m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 27m
kube-system kube-flannel-ds-785q2 1/1 Running 0 3m19s
kube-system kube-flannel-ds-h6qw2 1/1 Running 0 3m19s
kube-system kube-flannel-ds-wzxvw 1/1 Running 0 3m19s
kube-system kube-proxy-9dl5s 1/1 Running 0 18m
kube-system kube-proxy-hxkjz 1/1 Running 0 27m
kube-system kube-proxy-xwz9b 1/1 Running 0 18m
kube-system kube-scheduler-k8s-master 1/1 Running 0 27m
If any pod is not Running, inspect its detailed information to troubleshoot.
# syntax: kubectl describe pod <pod name> -n <namespace>
For example:
[root@k8s-master ~]# kubectl describe pod kube-proxy-xwz9b -n kube-system
Once the network plugin is installed, all nodes show the Ready status:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 29m v1.18.0
k8s-node01 Ready <none> 19m v1.18.0
k8s-node02 Ready <none> 19m v1.18.0
Deploy the Dashboard add-on
Dashboard is a web UI for managing a Kubernetes cluster.
Modify the manifest so the Dashboard can be reached from a client
@1
vim /root/kubeadm.1.18.img/recommended.yaml
# the numbers in front are line numbers in the file
40 ports:
41 - port: 443
42 targetPort: 8443 # add nodePort: 30001 below line 42
43 nodePort: 30001 # add type: NodePort below line 43
44 type: NodePort
45 selector:
46 k8s-app: kubernetes-dashboard
# mind the indentation
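The same two insertions can also be made with sed, following the escaping style used earlier for init-config.yaml; this is a sketch that assumes the line numbers shown above, so verify the result before applying:
[root@k8s-master ~]# sed -i '42a\ \ \ \ \ \ nodePort: 30001' /root/kubeadm.1.18.img/recommended.yaml
[root@k8s-master ~]# sed -i '43a\ \ type: NodePort' /root/kubeadm.1.18.img/recommended.yaml
# print the edited block to confirm
[root@k8s-master ~]# sed -n '40,46p' /root/kubeadm.1.18.img/recommended.yaml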
Download the image files
# two image files
Link: https://pan.baidu.com/s/1M9U3R9qtirf6_pz7dQsOYQ
Access code: feng
[root@k8s-master ~]# kubectl create -f /root/kubeadm.1.18.img/recommended.yaml
After creation, the images are pulled automatically, but the pull usually fails.
Check where the pods were scheduled and import the images manually:
[root@k8s-master ~]# kubectl get pods -A -o wide | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-2p2d9 0/1 ContainerCreating 0 59s <none> k8s-node02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7f99b75bf4-krgv5 0/1 ContainerCreating 0 59s <none> k8s-node01 <none> <none>
# dashboard-metrics-scraper was scheduled to node02 and kubernetes-dashboard to node01
Import the images @2 @3
@2
[root@k8s-node01 ~]# docker load < kubernetesui-metrics-scraper_v1.0.4
57757cd7bb95: Loading layer 238.6kB/238.6kB
14f2e8fb1e35: Loading layer 36.7MB/36.7MB
52b345e4c8e0: Loading layer 2.048kB/2.048kB
Loaded image: kubernetesui/metrics-scraper:v1.0.4
@3
[root@k8s-node02 ~]# docker load < kubernetesui-dashboard_v2.0.3
3019294f9e33: Loading layer 227.4MB/227.4MB
Loaded image: kubernetesui/dashboard:v2.0.3
Check again after the import @1
[root@k8s-master ~]# kubectl get pods -A -o wide | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-2p2d9 1/1 Running 0 6m5s 10.244.2.2 k8s-node02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7f99b75bf4-krgv5 1/1 Running 0 6m5s 10.244.1.2 k8s-node01 <none> <none>
[root@k8s-master ~]# kubectl get deployments.apps -n kubernetes-dashboard
NAME READY UP-TO-DATE AVAILABLE AGE
dashboard-metrics-scraper 1/1 1 1 8m23s
kubernetes-dashboard 1/1 1 1 8m23s
[root@k8s-master ~]# kubectl get service -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.100.142.193 <none> 8000/TCP 9m22s
kubernetes-dashboard NodePort 10.108.227.53 <none> 443:30001/TCP 9m22s
[root@k8s-master ~]# ss -tnl | grep 30001
LISTEN 0 128 *:30001 *:*
The Dashboard can now be reached from outside at https://192.168.10.102:30001 (the NodePort is exposed on every node).
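A quick reachability check from the command line before opening the browser; -k skips certificate verification because the Dashboard uses a self-signed certificate (the HTML response is omitted here):
[root@k8s-master ~]# curl -k https://192.168.10.102:30001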
Create a service account and bind the admin role
[root@k8s-master ~]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
# grant the account the cluster-admin role
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# retrieve the token
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-fm7l8
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: b2e4f1a0-c192-4bb6-8c9a-d2665cc02d27
Type: kubernetes.io/service-account-token
Data
====
#-------------------------- copy the token below into the browser to authenticate
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im1KbUotQV8zdlllUWVuX0JrNFppdDJGOTFXc21RNVhrOTJyYXZEbjJGZ2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZm03bDgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjJlNGYxYTAtYzE5Mi00YmI2LThjOWEtZDI2NjVjYzAyZDI3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.jDDyAT38GjZNYviLFMCvKeERcsGfACOQ9LdUP3z_8rfwQwq-d5U-AOW1Kl-v-y7lu9HWEKepYHp2WdGGSF-3ajouAfx_QVpBXIgbZfYFwuGRshFqBAsudQrTMEeTyEONC2sPs3ZlplFGNi7hNYAULK4AaHpKTfsmNP7tK8piCSqsHAaL2XoJiEtx9UagxpVQt3cvMF45Pk8xwIkhoOnDYl6VTbKHtkUJwFn8kAWKK8NreQptyco8cf0FvDa5Zg_iqrkKJzwwzB5IvLzTC21H9_D-SgV8-X0vyUwsKEa7R7zjwAIzJxwmlg6ffA7rfJd-7v8C_OKJiJ3QJiFsBSdVGA
#------------------------------------------------------------
ca.crt: 1025 bytes
namespace: 11 bytes
Paste the token into the browser to log in.
To remove the Dashboard, run:
[root@k8s-master ~]# kubectl delete -f /root/kubeadm.1.18.img/recommended.yaml
# this deletes everything created by this yaml file