Preface

First, be clear that the multi-node deployment is built on top of the single-node deployment environment; its purpose is to provide disaster recovery against the single point of failure addressed below.

Copying files

Copy the required files and directories from the single-node master to the standby node.
These include the kubernetes working directory, the startup scripts for the three components, and the etcd certificate directory (without the etcd certificates the services cannot start).

[root@localhost ~]# scp -r /opt/kubernetes/ root@192.168.10.11:/opt	#k8s working directory

[root@localhost ~]# scp /usr/lib/systemd/system/{kube-apiserver.service,kube-controller-manager.service,kube-scheduler.service} root@192.168.10.11:/usr/lib/systemd/system/	#startup scripts for the three k8s components

[root@localhost ~]# scp -r /opt/etcd/ root@192.168.10.11:/opt/	#etcd certificate directory

Configuration on the master2 node

Because the kube-apiserver configuration file uses the original master's IP address, the copied file will not work as-is: change --bind-address and --advertise-address to master2's own IP address.

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.21:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
--bind-address=192.168.10.11 \
--secure-port=6443 \
--advertise-address=192.168.10.11 \
……
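A quick way to make these two edits is sed. This is only a sketch; it assumes the copied config file lives at /opt/kubernetes/cfg/kube-apiserver and that the original master's address was 192.168.10.21 (adjust both to your environment):

sed -i 's#--bind-address=192.168.10.21#--bind-address=192.168.10.11#' /opt/kubernetes/cfg/kube-apiserver
sed -i 's#--advertise-address=192.168.10.21#--advertise-address=192.168.10.11#' /opt/kubernetes/cfg/kube-apiserver
grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver	#verify; the etcd server list must keep the original addresses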

Starting the services

Start the services using the startup scripts for the three components that were copied over.

  • Start the services and enable them at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl status kube-apiserver.service

systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl status kube-controller-manager.service

systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
systemctl status kube-scheduler.service
  • Add the environment variable so the kubectl command works
vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
source /etc/profile	#reload so the change applies to the current shell
  • After the services are started, check the cluster node status
[root@server1 cfg]# kubectl get node
NAME            STATUS   ROLES    AGE    VERSION
192.168.10.12   Ready    <none>   135m   v1.17.16-rc.0
192.168.10.13   Ready    <none>   78m    v1.17.16-rc.0

At this point the primary master reports the same node status as the standby node, which shows the multi-node configuration is complete.
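As an extra sanity check (not part of the original steps), the control-plane components on master2 can also be verified as healthy:

kubectl get cs	#componentstatuses: scheduler, controller-manager and the etcd members should all report Healthy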

nginx load balancer deployment

  • First configure the nginx package repository
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

yum list	#refresh the yum package list
yum -y install nginx	#install nginx
vim /etc/nginx/nginx.conf
#insert a stream block between the events block and the http block
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;	#log file location

    upstream k8s-apiserver {		#pool of the two master apiserver addresses
        server 192.168.10.21:6443;	#apiserver port 6443
        server 192.168.10.11:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;	#proxy to the upstream pool
    }
}

nginx -t  	#check the nginx configuration syntax
systemctl start nginx
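Optionally, confirm that the stream proxy is actually listening on 6443 (a sketch, assuming ss from iproute2 is installed):

ss -lntp | grep 6443	#an nginx process should be bound to 0.0.0.0:6443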

The configuration on the two nginx load-balancer nodes is identical.

To tell the two nginx nodes apart when testing, you can edit /usr/share/nginx/html/index.html on each node and customize its default page.
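For example (a sketch; the marker text is arbitrary):

echo "<h1>nginx lb01</h1>" > /usr/share/nginx/html/index.html	#run on the first load balancer
echo "<h1>nginx lb02</h1>" > /usr/share/nginx/html/index.html	#run on the second load balancer
curl 127.0.0.1	#each node should now return its own marker page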

Installing and configuring keepalived

yum -y install keepalived
  • Edit the configuration file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {               #define the check_nginx nginx health-check script (this is what ties keepalived to nginx)
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set it to 90 on the backup server
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {                      #script to track
        check_nginx                     #name of the vrrp_script block above
    }
}
  • Create the check_nginx.sh script: when nginx on a node goes down, the script stops keepalived as well, so the floating address moves to the other nginx server, providing load balancing with failover.
vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")  #count the running nginx processes

#if nginx has stopped, stop keepalived as well
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
  • Grant execute permission on the failover script
chmod +x /etc/nginx/check_nginx.sh
  • Start the service and verify the cluster behavior
systemctl start keepalived
systemctl status keepalived.service
  • Use ip addr to see which host currently holds the VIP, then stop nginx on that host; because of the check script, keepalived on that node will be stopped as well
systemctl stop nginx.service
ip addr
systemctl start nginx
systemctl start keepalived	#keepalived was stopped along with nginx, so restart it too
ip addr	#the floating address has returned to the original node

So once keepalived is installed, the two machines provide a working active/standby (hot backup) pair.
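To confirm the whole chain (VIP -> nginx -> apiserver) works, the apiserver can be probed through the virtual address; a sketch:

ip addr show ens33 | grep 192.168.10.100	#confirm which node currently holds the VIP
curl -k https://192.168.10.100:6443/version	#should return the apiserver version JSON via the nginx stream proxy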

Pointing both worker nodes at the virtual address

[root@server2 cfg]# vim bootstrap.kubeconfig
[root@server2 cfg]# vim kubelet.kubeconfig
[root@server2 cfg]# vim kube-proxy.kubeconfig
# edit each config file and change the server entry to the virtual address
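Instead of editing the three files by hand, a sed one-liner works too (a sketch, assuming the kubeconfigs live in /opt/kubernetes/cfg and previously pointed at the original master 192.168.10.21):

cd /opt/kubernetes/cfg
sed -i 's#server: https://192.168.10.21:6443#server: https://192.168.10.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig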

[root@server2 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.10.100:6443
kubelet.kubeconfig:    server: https://192.168.10.100:6443
kube-proxy.kubeconfig:    server: https://192.168.10.100:6443
  • Restart the kubelet and kube-proxy services
[root@server2 cfg]# systemctl restart kubelet.service 
[root@server2 cfg]# systemctl restart kube-proxy.service 
  • On the host that currently holds the virtual address, check nginx's k8s access log; the scheduling of requests is visible.
[root@kvm ~]# tail /var/log/nginx/k8s-access.log
192.168.10.12 192.168.10.21:6443 - [04/Mar/2021:21:45:36 +0800] 200 1542
192.168.10.13 192.168.10.21:6443 - [04/Mar/2021:21:45:36 +0800] 200 1541
192.168.10.13 192.168.10.21:6443 - [04/Mar/2021:21:45:40 +0800] 200 7901
192.168.10.12 192.168.10.11:6443 - [04/Mar/2021:21:45:44 +0800] 200 7530
192.168.10.12 192.168.10.11:6443 - [04/Mar/2021:21:46:14 +0800] 200 1182
192.168.10.13 192.168.10.21:6443 - [04/Mar/2021:21:46:14 +0800] 200 1664
192.168.10.13 192.168.10.11:6443 - [04/Mar/2021:21:46:14 +0800] 200 1182
192.168.10.12 192.168.10.11:6443 - [04/Mar/2021:21:46:14 +0800] 200 1183
192.168.10.13 192.168.10.11:6443 - [04/Mar/2021:21:46:14 +0800] 200 1183
192.168.10.12 192.168.10.11:6443 - [04/Mar/2021:21:46:14 +0800] 200 1664
  • Verify that the node association works by deploying a container from the master
[root@localhost ~]# kubectl get pods	#no containers have been deployed yet, so no pods exist
No resources found in default namespace.

[root@localhost ~]# kubectl run nginx --image=nginx		# create and run a container
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

[root@localhost ~]# kubectl get pods	# view the pod information
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6db489d4b7-zr4s2   1/1     Running   0          2m16s
  • At this point we cannot view the logs of the container running inside the pod, which makes it hard to monitor its state, so grant the permission needed to read the logs.
[root@localhost ~]# kubectl logs nginx-6db489d4b7-zr4s2
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-6db489d4b7-zr4s2)

[root@localhost ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
  • The pod was only just created, so there are no access logs yet; first find out which node the pod is running on.
[root@localhost ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-zr4s2   1/1     Running   0          7m10s   172.17.61.3   192.168.10.13   <none>           <none>
  • The output shows the pod is running on node 192.168.10.13, which hosts the pod's gateway address 172.17.61.1/24
[root@server3 cfg]# ip addr
……
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:71:fb:18:0b brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.1/24 brd 172.17.61.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:71ff:fefb:180b/64 scope link 
       valid_lft forever preferred_lft forever
       ……
  • We can reach the service the container provides with curl 172.17.61.3.
    The same page can also be opened by IP from a browser on node 192.168.10.13, and from the other node as well: with the flannel network in place, nodes, pods and containers on the same network can all communicate directly (see the sketch after this list).

  • Then go back to the master node and view the pod's access log directly; it shows the requests coming from both nodes.
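A minimal check of the cross-node connectivity mentioned above (a sketch; the pod IP 172.17.61.3 comes from the kubectl get pods -o wide output):

curl -s 172.17.61.3 | head -4	#on node 192.168.10.13, which hosts the pod: prints the nginx welcome page
curl -s 172.17.61.3 | head -4	#the same command run on the other node also succeeds over the flannel overlay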

Setting up the k8s UI

Everything so far has been done on the command line; to manage the cluster from a web page, a few more steps are required.

  • First create a dashboard working directory under the earlier k8s directory to hold the files for the k8s UI.
 mkdir dashboard

Then enter the directory and copy in all of the files the k8s UI needs.

  • List the files
[root@localhost dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml
  • Create the resources, one by one, to bring up the UI.
[root@localhost dashboard]# kubectl create -f dashboard-rbac.yaml 
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@localhost dashboard]# kubectl create -f dashboard-secret.yaml 
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@localhost dashboard]# kubectl create -f dashboard-configmap.yaml 
configmap/kubernetes-dashboard-settings created
[root@localhost dashboard]# kubectl create -f dashboard-controller.yaml 
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@localhost dashboard]# kubectl create -f dashboard-service.yaml 
service/kubernetes-dashboard created
  • When done, confirm the pod was created in the designated kube-system namespace
[root@localhost ~]#  kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-55f9467c4b-phprh   1/1     Running   1          35h
  • Check how to access it
[root@localhost ~]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-55f9467c4b-phprh   1/1     Running   1          35h

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.214   <none>        443:30001/TCP   35h
  • The following command shows which host the resource was scheduled to
[root@localhost ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
kubernetes-dashboard-55f9467c4b-phprh   1/1     Running   1          35h   172.17.77.2   192.168.10.12   <none>           <none>

As shown, it was scheduled to the node with IP address 192.168.10.12.

  • The page can now be opened in a browser. Because the service uses port 443, the URL is https://192.168.10.12:30001. Chrome, however, will reject the dashboard's untrusted self-signed certificate and the access fails, so we self-sign our own certificate in the dashboard directory.
  • Self-signing a certificate
[root@master dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF	#create the CSR signing request in JSON format
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing"
       }
   ]
}
EOF

K8S_CA=$1
#generate the dashboard certificate, signed by the cluster CA
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard

#delete the original certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system
#recreate the secret from the new certificate files
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
  • Generate the certificates
    The generated certificates are derived from the CA material in the /root/k8s/k8s-cert directory
[root@master dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
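You can check that the secret was rebuilt from the new certificate files (a sketch):

kubectl describe secret kubernetes-dashboard-certs -n kube-system	#the Data section should now list dashboard.pem and dashboard-key.pem among the files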
  • Edit dashboard-controller.yaml to point at the certificate files, completing the self-signing, as follows:
[root@master dashboard]# vim dashboard-controller.yaml 
#around line 47, insert the tls options below --auto-generate-certificates, pointing at the newly generated certificate and key
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem
  • Redeploy
#redeploy with apply -f
#apply can only be used for updates after the resources were first created with create
[root@master dashboard]# kubectl apply -f dashboard-controller.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured
  • Note that redeploying may move the resource to a different host, so it is best to check the pod's location again
[root@master dashboard]# kubectl get pods -n kube-system -o wide
  • Logging in to the UI requires a token, so we now need to generate one;
[root@master dashboard]# vim k8s-admin.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin		#create the dashboard-admin resource, effectively an administrator account
  namespace: kube-system
---
kind: ClusterRoleBinding	#bind a cluster role to the account
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin		#the cluster role is effectively the administrator identity
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
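For reference, the same account and binding can also be created imperatively instead of through YAML (an equivalent sketch, not the original steps):

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin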
  • First list the secrets in the kube-system namespace
[root@localhost ~]#  kubectl get secret -n kube-system
  • Generate the token
[root@master dashboard]# kubectl create -f k8s-admin.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
  • List the secrets again
[root@localhost ~]#  kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-xj8s6        kubernetes.io/service-account-token   3      25h
default-token-btvfr                kubernetes.io/service-account-token   3      2d2h
kubernetes-dashboard-certs         Opaque                                11     24h
kubernetes-dashboard-key-holder    Opaque                                2      37h
kubernetes-dashboard-token-2jjzq   kubernetes.io/service-account-token   3      26h
kubernetes-dashboard-token-t2lht   kubernetes.io/service-account-token   3      37h
  • Inspect the token dashboard-admin-token-xj8s6 in detail
[root@localhost ~]# kubectl describe secret dashboard-admin-token-xj8s6 -n kube-system
Name:         dashboard-admin-token-xj8s6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 4f594e95-dcdd-45ee-9dcf-db3287c86f63

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImdaLXp4T0w3UlJSSHFnT2FQOW1NMG1CZkZCQTB5ekg3YTR2RF91Wk1zS3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teGo4czYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGY1OTRlOTUtZGNkZC00NWVlLTlkY2YtZGIzMjg3Yzg2ZjYzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.e8vekEztfv18ey2imNUAERdiLE_Mm61R4iJ3EmaNQG0fDEPkA_Ll8Pt_hbDJ-p00LbCYmwyyQpAxU7Uh1hQvUyjCMoBfsvANPr0eKF3yYSycmsSZT9pcqjynt6whGIqCgnt1mFtm6AlahWX0RNe7x0xXIi07gdKUiVcFUb_d4p9K9ExdPk1oC2sckH9xCMkSl3qwq6kVFIJST8JLiy0IjR2Aa6NYU2xlBPGRBuPIrzTgcAKXf3Zq-cg1sXnELARSAtzdN_ywnbuSEuykv9ZCmSpw81NFLk7RgsrfFNL1Y_jJ7Tg5-tLmAhaWJZtF4P6OseUORlQnJmdzLdc2mSQCzQ

The string after token: is the login token; copy it and paste it into the token field on the web login page to sign in.
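To pull the token out in a single step instead of reading it from the describe output, something like this works (a sketch, assuming only one dashboard-admin token secret exists):

kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/dashboard-admin-token/{print $1}') | awk '/^token:/{print $2}'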

Contents of the files that build the dashboard UI

When creating the UI resources, apply the files in exactly the required order, otherwise errors will occur.

  • ① dashboard-rbac.yaml — roles and access-control resources
kind: Role                              #role resource
apiVersion: rbac.authorization.k8s.io/v1        #API version (RBAC has its own API group)
metadata:                       #metadata
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal    #name of the resource being created
  namespace: kube-system
rules:                          #the permission rules
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system        #namespace (the default is default)
  • ② dashboard-secret.yaml — secrets
apiVersion: v1
kind: Secret		#resource kind
metadata:			#metadata
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs	#resource name
  namespace: kube-system		#namespace
type: Opaque
---						#--- separates YAML documents
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder		#key holder
  namespace: kube-system
type: Opaque
data:
  csrf: ""
  • ③ dashboard-configmap.yaml — configuration management
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-settings
  namespace: kube-system
  • ④ dashboard-controller.yaml — the controllers
apiVersion: v1
kind: ServiceAccount			#the service account the dashboard runs as
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment			#the deployment controller
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical			
      containers:			#container name and image
      - name: kubernetes-dashboard
        image: siriuszg/kubernetes-dashboard-amd64:v1.8.3
        resources:			#CPU and memory requests and limits
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443	#8443 is the container's external HTTPS port
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:			#volumes mounted into the container
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
  • ⑤ dashboard-service.yaml — the Service
apiVersion: v1
kind: Service				#service resource
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort			#exposure type (clients use a port exposed on the nodes, i.e. a NodePort)
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443				#port inside the cluster
    targetPort: 8443			#the pod's container port
    nodePort: 30001			#port exposed on each node (the mapped port)
#external clients access this resource at <node IP>:30001
#this is supported by kube-proxy running on the worker nodes
#the master stays in the back as the administrator and is not accessed by users directly
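So an external client reaches the dashboard through any node's IP on the NodePort, for example (a sketch):

curl -k https://192.168.10.12:30001	#-k skips verification of the self-signed certificate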