Source materials:

https://www.yuque.com/leifengyang/oncloud/ghnb83#AGHOX

https://www.bilibili.com/video/BV13Q4y1C7hS?p=26&spm_id_from=333.1007.top_right_bar_window_history.content.click

Kubernetes Features

  • Service discovery and load balancing
    Kubernetes can expose a container using a DNS name or its own IP address. If traffic into a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.

  • Storage orchestration
    Kubernetes allows you to automatically mount a storage system of your choice, such as local storage or a public cloud provider.

  • Automated rollouts and rollbacks

  • Automatic bin packing
    Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions about managing their resources.

  • Self-healing
    Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.

  • Secret and configuration management (similar to the config center in Spring Cloud)

Kubernetes provides you with a framework for running distributed systems resiliently: it takes care of your scaling requirements, failover, and deployment patterns.

For example, Kubernetes can easily manage a canary (gray) deployment of your system.
You can also extend Kubernetes with custom plugins.

K8s Cluster Architecture

Kubernetes Cluster = N master nodes + N worker nodes (N >= 1)

Kubernetes Cluster Deployment

Deployment workflow

1. Three machines, each with docker installed.
2. Install the 'factory manager' (kubelet) on every machine.
3. Install kubectl and kubeadm on every machine; these tools assist the administrator.
   Commands are issued through kubectl. Strictly speaking, kubectl only needs to be installed on the 'headquarters' (the master), because only headquarters issues orders (the diagram showed it on all three machines, but installing it on one is enough).
   kubeadm is for managing the cluster: it helps you bootstrap a k8s cluster quickly, and it can tear one down again too.
4. Pick one machine as the master node (here, the second machine) and run kubeadm init on it; this initializes the master. The 'factory manager' (kubelet) on that node then installs the following images and starts them:
   kube-scheduler, kube-proxy, kube-apiserver, etcd, kube-controller-manager.

   With that, the master node is ready.
5. The other nodes (e.g. the first one) each run kubeadm join to join the cluster. The kubelet on each such node then installs a 'gatekeeper' (kube-proxy) for it.
   


Installation

1. centos-1, centos-2, and centos-3 all have docker installed.
The required docker version is Docker version 19.03.13, build 4484c46d9d.

2. Environment checks

Machine requirements:
• 2 GB or more of RAM per machine
• 2 or more CPU cores
• Full network connectivity between all machines in the cluster (public or internal network both work); set firewall allow rules so the nodes trust each other on the internal network
• No duplicate hostnames, MAC addresses, or product_uuids among the nodes; set a distinct hostname with: hostnamectl set-hostname <name>
• The required ports open on every machine
• Swap disabled: to keep kubelet working properly, you must disable the swap partition

First, check the swap partition:

[root@CentOS7-1 yanweiling]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1819         566         789          16         462        1088
Swap:          2047           0        2047

Swap is only disabled once the Swap row reads 0 0 0, so it is not disabled yet.

1. Environment preparation

Ensure that the MAC address and product_uuid are unique on every node.

# MAC address of the network interfaces
ip link

#product_uuid
cat /sys/class/dmi/id/product_uuid

The NIC address can also be checked with ip addr, or with dmesg | grep eth; all NIC information from Linux boot can be found via dmesg.
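
To compare the values across the three machines at a glance, a small sketch (assumptions: the hostnames below are the ones chosen in the next step, they resolve, and passwordless ssh is configured):

# Hypothetical helper: print each node's product_uuid and first MAC address
for host in centos7-master centos7-slave2 centos7-slave3; do
  echo "== $host =="
  ssh "$host" "cat /sys/class/dmi/id/product_uuid; ip link show | awk '/link\/ether/{print \$2; exit}'"
done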

Next, run the following operations on all three machines.

# Set each machine's own hostname; name the three machines centos7-master, centos7-slave2, and centos7-slave3
hostnamectl set-hostname xxx
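# e.g. on the first machine (using the naming above):
hostnamectl set-hostname centos7-master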

# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0    # temporary
sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config  # edit the config file for a permanent change


# Disable swap
swapoff -a   # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanently disable the swap partition
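
# Sanity check (sketch): after the two commands above, the Swap row should read 0 0 0
free -m | grep -i swap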

# Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720         
EOF

sudo sysctl --system
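
To confirm the settings took effect, a quick check (both sysctls should print 1):

# Verify the bridge module is loaded and the sysctls are applied
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward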

Use centos-1 as the master node.

2. Install kubelet (the 'factory manager'), kubeadm (bootstraps the cluster), and kubectl (the command-line tool)
## Configure the Kubernetes package repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

## Install the three components
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

## Put the 'factory manager' in place
sudo systemctl enable --now kubelet

Check kubelet's status and you will see it run for a moment, then exit:
systemctl status kubelet

kubelet now restarts every few seconds: it is stuck in a loop, waiting for instructions from kubeadm.
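
To watch what kubelet is doing in the meantime, its logs are in the journal:

# Follow kubelet logs; expect repeated restarts until kubeadm init/join runs
journalctl -u kubelet -f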

3. Bootstrap the cluster with kubeadm
1. Download the images each machine needs

Run this only on the master node, centos-1.

## Create a shell script
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
# Run the script to pull the listed images
chmod +x ./images.sh && ./images.sh
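
A quick way to confirm all seven images arrived (the grep pattern simply matches the registry prefix used in the script):

# Should list the 7 images pulled from the aliyuncs mirror
docker images | grep lfy_k8s_images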
2. The other (worker) nodes

They only need to pull the kube-proxy:v1.20.9 image.

In practice here, the other machines pulled all the images, the same as the master.
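
If you do want to pull only the required image on a worker, a minimal sketch:

# On each worker node: only kube-proxy is strictly needed before joining
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy:v1.20.9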

3. Initialize the master node

Run on all machines:

# Add the master's domain mapping on every machine; change the IP below to your own
echo "192.168.159.133 cluster-endpoint" >> /etc/hosts

Once added, ping cluster-endpoint succeeds from all three machines.

Run only on the master:

# Initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.159.133 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16  # the pod network for the cluster; I use the calico network plugin later, and this CIDR must also be configured in calico

# None of these network ranges may overlap

When it succeeds, it prints:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token y0ys5t.2ukg15kecdna46pf \
    --discovery-token-ca-cert-hash sha256:37f8a4703630501df8e55fbe6ac1c075df022d14a7032d19a57f47b04b8a46dc \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token y0ys5t.2ukg15kecdna46pf \
    --discovery-token-ca-cert-hash sha256:37f8a4703630501df8e55fbe6ac1c075df022d14a7032d19a57f47b04b8a46dc 

  1. Following the printed instructions, run the first step on the master:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
# List all cluster nodes
kubectl get nodes

For example:
[root@CentOS7-1 yanweiling]# kubectl get nodes
NAME        STATUS     ROLES                  AGE    VERSION
centos7-1   NotReady   control-plane,master   9m5s   v1.20.9

The status is NotReady because we have not yet deployed a pod network to the cluster.

We need a pod network add-on to connect the networks of all the machines in the k8s cluster.

k8s supports many add-ons; see https://kubernetes.io/docs/concepts/cluster-administration/addons/

  2. Install the network add-on
    Install calico; run on the master node:
# Download the calico manifest
curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

Note: calico.yaml contains the IP range 192.168.0.0/16, which is exactly the pod-network-cidr value set when initializing the master.

[root@CentOS7-1 yanweiling]# cat calico.yaml | grep 192.168.0.0
#   value: "192.168.0.0/16"
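
If you had initialized the master with a different --pod-network-cidr (say, hypothetically, 10.244.0.0/16), you would uncomment and edit that block in calico.yaml before applying it, so that it reads:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"   # hypothetical CIDR; must match --pod-network-cidr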

Checking status
# List all cluster nodes
kubectl get nodes
# Create resources in the cluster from a manifest
kubectl apply -f xxxx.yaml

# Which applications are deployed in the cluster?
# kubectl get pods -A is the k8s counterpart of docker ps:
# a running application is a container in docker and a Pod in k8s
kubectl get pods -A

------
For example:
[root@CentOS7-1 yanweiling]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6fcb5c5bcf-24lws   1/1     Running   0          7m16s
kube-system   calico-node-j6gbs                          1/1     Running   0          7m16s
kube-system   coredns-5897cd56c4-8rh4h                   1/1     Running   0          25m
kube-system   coredns-5897cd56c4-zlmxm                   1/1     Running   0          25m
kube-system   etcd-centos7-1                             1/1     Running   0          25m
kube-system   kube-apiserver-centos7-1                   1/1     Running   0          25m
kube-system   kube-controller-manager-centos7-1          1/1     Running   0          25m
kube-system   kube-proxy-nqvc4                           1/1     Running   0          25m
kube-system   kube-scheduler-centos7-1                   1/1     Running   0          25m
[root@CentOS7-1 yanweiling]# 

Once every application on the master shows Running, you can continue.

4. Join the worker nodes to the cluster
kubeadm join cluster-endpoint:6443 --token y0ys5t.2ukg15kecdna46pf \
    --discovery-token-ca-cert-hash sha256:37f8a4703630501df8e55fbe6ac1c075df022d14a7032d19a57f47b04b8a46dc 

On success it prints:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

You can now check on the master node:

[root@CentOS7-1 yanweiling]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
centos7-1   Ready    control-plane,master   45m     v1.20.9
centos7-2   Ready    <none>                 79s     v1.20.9
centos7-3   Ready    <none>                 4m28s   v1.20.9

After a while, all the newly added pods are running; watch them with:
watch -n 1 kubectl get pods -A    # rerun 'kubectl get pods -A' every second
kube-system   calico-kube-controllers-6fcb5c5bcf-wttnv   1/1     Running   0          47m
kube-system   calico-node-hp6xh                          1/1     Running   0          7m2s
kube-system   calico-node-p85d5                          1/1     Running   0          10m
kube-system   calico-node-tcncs                          1/1     Running   0          47m
kube-system   coredns-5897cd56c4-cpvqc                   1/1     Running   0          50m
kube-system   coredns-5897cd56c4-h5bcx                   1/1     Running   0          50m
kube-system   etcd-centos7-1                             1/1     Running   0          51m
kube-system   kube-apiserver-centos7-1                   1/1     Running   0          51m
kube-system   kube-controller-manager-centos7-1          1/1     Running   0          51m
kube-system   kube-proxy-6ld55                           1/1     Running   0          10m
kube-system   kube-proxy-qtcsb                           1/1     Running   0          50m
kube-system   kube-proxy-sxjjn                           1/1     Running   0          7m2s
kube-system   kube-scheduler-centos7-1                   1/1     Running   0          51m

Self-recovery

After all three machines are rebooted, run:

kubectl get pods -A
kubectl get node
and you will find that k8s has already recovered by itself (kubelet is enabled as a systemd service, and the control-plane components run as static pods that kubelet brings back up on boot).

Generating a new token

kubeadm token create --print-join-command

For a high-availability deployment, this is also the step where you run the join command for an additional master node, as sketched below.
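
A sketch of that flow (placeholders are illustrative; the token and hash come from the join command printed earlier, and the certificate key is printed by the upload-certs phase):

# On the existing master: re-upload control-plane certificates and print a certificate key
sudo kubeadm init phase upload-certs --upload-certs

# On the new master: join as a control-plane node
kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key-from-upload-certs>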

Uninstalling k8s

sudo kubeadm reset -f
sudo rm -rvf $HOME/.kube
sudo rm -rvf /etc/kubernetes/
sudo rm -rvf /etc/systemd/system/kubelet.service.d
sudo rm -rvf /etc/systemd/system/kubelet.service
sudo rm -rvf /usr/bin/kube*
sudo rm -rvf /etc/cni
sudo rm -rvf /opt/cni
sudo rm -rvf /var/lib/etcd
sudo rm -rvf /var/etcd
sudo yum remove kube*
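
Note that kubeadm reset does not clean up iptables or IPVS rules; for a fully clean slate you can flush those manually as well:

# Flush iptables rules left behind by kube-proxy/CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# Clear IPVS tables (only relevant if kube-proxy ran in IPVS mode and ipvsadm is installed)
sudo ipvsadm --clear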

Stopping k8s (note: these systemd units exist only in binary installations; in a kubeadm cluster like this one, only kubelet runs as a systemd service)

	CMD=stop
	systemctl $CMD etcd
	echo "---------- $CMD: kube-apiserver --------"
	systemctl $CMD kube-apiserver
	echo "---------- $CMD: kube-controller-manager --------"
	systemctl $CMD kube-controller-manager
	echo "---------- $CMD: kube-scheduler --------"
	systemctl $CMD kube-scheduler
	echo "---------- $CMD: kubelet--------"
	systemctl $CMD kubelet
	echo "---------- $CMD: kube-proxy--------"
	systemctl $CMD kube-proxy

Installing the dashboard (visual web UI)

1. Deployment

  1. Download the dashboard manifest

This must be run on the master node.

The official Kubernetes web UI:
https://github.com/kubernetes/dashboard

# Apply the dashboard manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

If the download is too slow and makes no progress, fetch the file first:

yum install -y wget
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
  2. Edit the configuration
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # added
  selector:
    k8s-app: kubernetes-dashboard
---
# Many browsers cannot use the auto-generated certificate, so we create our own below; comment out the kubernetes-dashboard-certs Secret declaration
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---

  3. Create the certificate
mkdir dashboard-certs

cd dashboard-certs/

# Create the namespace
kubectl create namespace kubernetes-dashboard

# Create the key file
openssl genrsa -out dashboard.key 2048

# Create the certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# Self-sign the certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Create the kubernetes-dashboard-certs object
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
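
A quick sanity check that the secret landed in the right namespace:

# Should show kubernetes-dashboard-certs with type Opaque
kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard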
  4. Install the dashboard

kubectl create -f recommended.yaml

This may report: Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists. It can be ignored.

5. Check the installation

[root@centos7-master yanweiling]# kubectl get pods -A  -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP                NODE             NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-6fcb5c5bcf-99g9m     1/1     Running   0          5h8m   192.168.108.66    centos7-master   <none>           <none>
kube-system            calico-node-68wkx                            1/1     Running   0          5h5m   192.168.159.134   centos7-slave2   <none>           <none>
kube-system            calico-node-cq576                            1/1     Running   0          5h8m   192.168.159.133   centos7-master   <none>           <none>
kube-system            coredns-5897cd56c4-5j5jv                     1/1     Running   0          5h9m   192.168.108.65    centos7-master   <none>           <none>
kube-system            coredns-5897cd56c4-ns9cj                     1/1     Running   0          5h9m   192.168.108.67    centos7-master   <none>           <none>
kube-system            etcd-centos7-master                          1/1     Running   0          5h9m   192.168.159.133   centos7-master   <none>           <none>
kube-system            kube-apiserver-centos7-master                1/1     Running   0          5h9m   192.168.159.133   centos7-master   <none>           <none>
kube-system            kube-controller-manager-centos7-master       1/1     Running   0          5h9m   192.168.159.133   centos7-master   <none>           <none>
kube-system            kube-proxy-b5szj                             1/1     Running   0          5h5m   192.168.159.134   centos7-slave2   <none>           <none>
kube-system            kube-proxy-kgc5r                             1/1     Running   0          5h9m   192.168.159.133   centos7-master   <none>           <none>
kube-system            kube-scheduler-centos7-master                1/1     Running   0          5h9m   192.168.159.133   centos7-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6c89cc8995-7p64w   1/1     Running   0          16m    192.168.108.71    centos7-master   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-849c586d57-7zj5x        1/1     Running   0          16m    192.168.108.72    centos7-master   <none>           <none>

[root@centos7-master yanweiling]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.96.118.85   <none>        8000/TCP        18m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.96.9.93     <none>        443:30000/TCP   18m   k8s-app=kubernetes-dashboard

6. Access from a browser

https://<master public IP>:30000

The login page asks for a token. Where does the token come from? (The browser will also warn about the self-signed certificate; proceed anyway.)

Generating the token

1. Create a dashboard admin ServiceAccount

vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

2. Save the file, then run:

kubectl create -f ./dashboard-admin.yaml

3. Grant the user permissions

vim dashboard-admin-bind-cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

4. Run:

kubectl create -f ./dashboard-admin-bind-cluster-role.yaml

5. View and copy the token

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

For example:

[root@binghe101 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-p8tng
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: c3640b5f-cd92-468c-ba01-c886290c41ca

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlVsRVBqTG5RNC1oTlpDS2xMRXF2cFIxWm44ZXhWeXlBRG5SdXpmQXpDdWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcDh0bmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzM2NDBiNWYtY2Q5Mi00NjhjLWJhMDEtYzg4NjI5MGM0MWNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XOrXofgbk5EDa8COxOkv31mYwciUGXcBD9TQrb6QTOfT2W4eEpAAZUzKYzSmxLeHMqvu_IUIUF2mU5Lt6wN3L93C2NLfV9jqaopfq0Q5GjgWNgGRZAgsuz5W3v_ntlKz0_VW3a7ix3QQSrEWLBF6YUPrzl8p3r8OVWpDUndjx-OXEw5pcYQLH1edy-tpQ6Bc8S1BnK-d4Zf-ZuBeH0X6orZKhdSWhj9WQDJUx6DBpjx9DUc9XecJY440HVti5hmaGyfd8v0ofgtdsSE7q1iizm-MffJpcp4PGnUU3hy1J-XIP0M-8SpAyg2Pu_-mQvFfoMxIPEEzpOrckfC1grlZ3g

Troubleshooting: the dashboard is unreachable after installation

[root@CentOS7-1 yanweiling]#  kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            calico-kube-controllers-6fcb5c5bcf-wttnv     1/1     Running            1          99m
kube-system            calico-node-hp6xh                            1/1     Running            1          58m
kube-system            calico-node-p85d5                            1/1     Running            1          62m
kube-system            calico-node-tcncs                            1/1     Running            1          99m
kube-system            coredns-5897cd56c4-cpvqc                     1/1     Running            1          102m
kube-system            coredns-5897cd56c4-h5bcx                     1/1     Running            1          102m
kube-system            etcd-centos7-1                               1/1     Running            1          102m
kube-system            kube-apiserver-centos7-1                     1/1     Running            1          102m
kube-system            kube-controller-manager-centos7-1            1/1     Running            1          102m
kube-system            kube-proxy-6ld55                             1/1     Running            1          62m
kube-system            kube-proxy-qtcsb                             1/1     Running            1          102m
kube-system            kube-proxy-sxjjn                             1/1     Running            1          58m
kube-system            kube-scheduler-centos7-1                     1/1     Running            1          102m
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-rl9rj   1/1     Running            0          24m
kubernetes-dashboard   kubernetes-dashboard-658485d5c7-dct8k        0/1     CrashLoopBackOff   8          24m
[root@CentOS7-1 yanweiling]# kubectl logs kubernetes-dashboard-658485d5c7-dct8k  -n kubernetes-dashboard
2022/04/25 12:23:18 Starting overwatch
2022/04/25 12:23:18 Using namespace: kubernetes-dashboard
2022/04/25 12:23:18 Using in-cluster config to connect to apiserver
2022/04/25 12:23:18 Using secret token for csrf signing
2022/04/25 12:23:18 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000d7a0)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000505c00)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000505c00)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
	/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x21c
[root@CentOS7-1 yanweiling]# 

The error is: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout

Solution:
Stop (delete) the dashboard, then make two changes to recommended.yaml; in effect this pins the dashboard to the master node.
In both the kubernetes-dashboard and dashboard-metrics-scraper Deployments, add nodeName: centos7-master (use your own master node's name) under spec.template.spec, as sketched below.
After setting the node name, repeat the installation steps described above.
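
A minimal sketch of the edit (assumes the master node is named centos7-master, as in this cluster; make the same change in both Deployments in recommended.yaml):

# recommended.yaml, in each Deployment (kubernetes-dashboard and dashboard-metrics-scraper):
spec:
  template:
    spec:
      nodeName: centos7-master   # added: schedule this pod onto the master node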

Deleting the dashboard

1. Find the pods

kubectl get pods --all-namespaces | grep "dashboard"

2. Delete the deployments

kubectl delete deployment kubernetes-dashboard --namespace=kubernetes-dashboard
kubectl delete deployment dashboard-metrics-scraper --namespace=kubernetes-dashboard

3. Find the services

kubectl get service -A

4. Delete the services

kubectl delete service kubernetes-dashboard --namespace=kubernetes-dashboard
kubectl delete service dashboard-metrics-scraper --namespace=kubernetes-dashboard

5. Delete the service account and secrets

kubectl delete sa kubernetes-dashboard --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-certs --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kubernetes-dashboard

6. List the namespaces

kubectl get ns

7. Delete the namespace

kubectl delete ns kubernetes-dashboard
