Preface

1: Multi-node k8s deployment with binaries

  • A single-node cluster must be deployed first; see https://blog.csdn.net/LPFAM/article/details/108874717

1.1: Environment overview

  • The topology also includes a harbor registry that is not covered here; when needed, deploy it on a separate server.

  • The single-node master, node1 and node2 were deployed earlier; see https://blog.csdn.net/LPFAM/article/details/108874717

  • This post continues with the remaining components.


  • Host allocation

    Hostname    IP address         Resources    Services deployed
    nginx01     192.168.100.150    2G, 4 CPU    nginx, keepalived
    nginx02     192.168.100.140    2G, 4 CPU    nginx, keepalived
    VIP         192.168.100.100    -            -
    master      192.168.100.170    1G, 2 CPU    apiserver, scheduler, controller-manager, etcd
    master02    192.168.100.160    1G, 2 CPU    apiserver, scheduler, controller-manager
    node01      192.168.100.180    2G, 4 CPU    kubelet, kube-proxy, docker, flannel, etcd
    node02      192.168.100.190    2G, 4 CPU    kubelet, kube-proxy, docker, flannel, etcd

1.2: Operations on the master02 node

  • Prerequisite: the binary single-node cluster (master, node1, node2) is already deployed

  • Initial setup

    Disable the firewall, SELinux, and NetworkManager (be sure to disable NetworkManager in a production environment)

[root@localhost ~]# hostnamectl set-hostname master02
[root@localhost ~]# su
[root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master02 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@master02 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
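
A quick way to confirm the hardening took effect (a minimal check, assuming CentOS 7 defaults):

[root@master02 ~]# systemctl is-active firewalld NetworkManager	'//both should report inactive or unknown'
[root@master02 ~]# getenforce	'//Permissive now; Disabled after the next reboot'
[root@master02 ~]# grep SELINUX= /etc/selinux/config	'//should show SELINUX=disabled'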

  • On the master node, copy the kubernetes configuration files and startup scripts to master02
[root@master bin]# scp -r /opt/kubernetes/ root@192.168.100.160:/opt/
The authenticity of host '192.168.100.160 (192.168.100.160)' can't be established.
ECDSA key fingerprint is SHA256:TMzdtoj+IhgDyqNAKSTa1eGs7zd4wkaVTMgMzz3nFk4.
ECDSA key fingerprint is MD5:ba:57:09:36:e9:07:fa:ee:5f:81:72:59:b2:c9:39:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.160' (ECDSA) to the list of known hosts.
root@192.168.100.160's password: 
kube-apiserver                               100%  939   510.7KB/s   00:00    
token.csv                                    100%   84    99.9KB/s   00:00    
kube-scheduler                               100%   94   150.9KB/s   00:00    
kube-controller-manager                      100%  483   598.3KB/s   00:00    
kube-apiserver                               100%  184MB 120.8MB/s   00:01    
kubectl                                      100%   55MB 117.4MB/s   00:00    
kube-controller-manager                      100%  155MB 117.9MB/s   00:01    
kube-scheduler                               100%   55MB 117.6MB/s   00:00    
ca-key.pem                                   100% 1679   913.8KB/s   00:00    
ca.pem                                       100% 1359     1.3MB/s   00:00    
server-key.pem                               100% 1675     2.1MB/s   00:00    
server.pem                                   100% 1643     1.8MB/s   00:00    
[root@master bin]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.100.160:/usr/lib/systemd/system/
root@192.168.100.160's password: 
kube-apiserver.service                       100%  282   145.8KB/s   00:00    
kube-controller-manager.service              100%  317   316.7KB/s   00:00    
kube-scheduler.service                       100%  281    45.9KB/s   00:00    

  • On master02, modify the IP addresses in the apiserver configuration file
[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master02 cfg]# vim kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379 \
--bind-address=192.168.100.160 \
--secure-port=6443 \
--advertise-address=192.168.100.160 \
--allow-privileged=true \
...(remaining options unchanged)
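
After editing, a quick sanity check: only bind-address and advertise-address should carry master02's address, while etcd-servers must keep pointing at 192.168.100.170/180/190:

[root@master02 cfg]# grep -E 'bind-address|advertise-address|etcd-servers' kube-apiserver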

  • Copy the etcd certificates from the master node to master02 (master02 must have the etcd certificates in order to communicate with etcd)
[root@master bin]# scp -r /opt/etcd/ root@192.168.100.160:/opt
root@192.168.100.160's password: 
etcd                                         100%  523   177.8KB/s   00:00    
etcd                                         100%   18MB 124.7MB/s   00:00    
etcdctl                                      100%   15MB 114.5MB/s   00:00    
ca-key.pem                                   100% 1675     1.0MB/s   00:00    
ca.pem                                       100% 1265   448.5KB/s   00:00    
server-key.pem                               100% 1675     1.6MB/s   00:00    
server.pem                                   100% 1338   615.8KB/s   00:00  

  • On master02, check the etcd certificates and start the three services
[root@master02 cfg]# tree /opt/etcd
/opt/etcd
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

3 directories, 7 files
[root@master02 ~]# systemctl start kube-apiserver.service
[root@master02 ~]# systemctl status kube-apiserver.service
[root@master02 ~]# systemctl enable kube-apiserver.service
[root@master02 ~]# systemctl start kube-controller-manager.service
[root@master02 ~]# systemctl status kube-controller-manager.service
[root@master02 ~]# systemctl enable kube-controller-manager.service
[root@master02 ~]# systemctl enable kube-scheduler.service
[root@master02 ~]# systemctl start kube-scheduler.service
[root@master02 ~]# systemctl status kube-scheduler.service
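
Equivalently, the three services can be started and enabled in one loop (a sketch with the same effect as the commands above):

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl start $svc && systemctl enable $svc
done
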
  • Add the environment variable and check the node status
[root@master02 ~]# echo export PATH=$PATH:/opt/kubernetes/bin >> /etc/profile
[root@master02 ~]# source /etc/profile
[root@master02 ~]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.100.180   Ready    <none>   10h   v1.12.3
192.168.100.190   Ready    <none>   9h    v1.12.3
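
master02 can also verify the health of its control-plane components and the etcd cluster; on k8s v1.12 the componentstatuses API is still available, and scheduler, controller-manager and all three etcd members should report Healthy:

[root@master02 ~]# kubectl get cs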

1.3: Deploy the nginx load-balancing cluster

  • Initial setup on both nginx hosts (only nginx01's operations are shown): disable the firewall and SELinux, and configure the nginx yum repository
[root@localhost ~]# hostnamectl set-hostname nginx01	'//change the hostname'
[root@localhost ~]# su
[root@nginx01 ~]#  
[root@nginx01 ~]# systemctl stop firewalld && systemctl disable firewalld	'//disable the firewall and SELinux'
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo 	'//configure the nginx yum repository'
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0
[root@nginx01 ~]# yum clean all
[root@nginx01 ~]# yum makecache

  • Install nginx on both nginx hosts and enable layer-4 (TCP) forwarding (only nginx01's operations are shown)
[root@nginx01 ~]# yum -y install nginx	'//install nginx'
[root@nginx01 ~]# vi /etc/nginx/nginx.conf 
...(content omitted)
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;    # log file for the k8s traffic

    upstream k8s-apiserver {
        server 192.168.100.170:6443;    # master's apiserver (6443 is the apiserver port)
        server 192.168.100.160:6443;    # master02's apiserver
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
...(content omitted)
[root@nginx01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
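
The stream {} block requires nginx's stream module; the nginx.org packages are built with it, which can be confirmed with:

[root@nginx01 ~]# nginx -V 2>&1 | grep -o with-stream	'//should print with-stream at least once'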


  • Start the nginx service
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@nginx01 ~]# netstat -ntap | grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      66053/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      66053/nginx: master 
  • Deploy keepalived on both nginx hosts (only nginx01's operations are shown)
[root@nginx01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER              # change to BACKUP on the backup node
    interface ens33
    virtual_router_id 51      # VRRP instance ID; must be 51 on both master and backup (unique per instance)
    priority 100              # change to 90 on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24
    }
    track_script {
        check_nginx
    }
}
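
For reference, nginx02's keepalived.conf differs only in the fields called out in the comments above (using router_id NGINX_BACKUP is a conventional choice, not a requirement):

router_id NGINX_BACKUP
state BACKUP
priority 90
# virtual_router_id stays 51 and the virtual_ipaddress block is identical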

  • Create the health-check script, start keepalived, and check the VIP
[root@nginx01 ~]# mkdir -p /usr/local/nginx/sbin/	'//create the directory for the health-check script'
[root@nginx01 ~]# vim /usr/local/nginx/sbin/check_nginx.sh	'//write the health-check script'
#!/bin/bash
# stop keepalived when no nginx process is left, so the VIP fails over
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh	'//make the script executable'
[root@nginx01 ~]# systemctl start keepalived	'//start the service'
[root@nginx01 ~]# systemctl status keepalived
[root@nginx01 ~]# ip a	'//check the IP addresses on both nginx servers'
    The VIP is on nginx01
[root@nginx02 ~]# ip a


  • Verify VIP failover
[root@nginx01 ~]# pkill nginx	'//kill the nginx processes'
[root@nginx01 ~]# systemctl status keepalived	'//keepalived has now been stopped by the check script'
[root@nginx02 ~]# ip a	'//the VIP has moved to nginx02'


  • Restore the VIP
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived	'//start nginx first, then keepalived'
[root@nginx01 ~]# ip a	'//checking again, the VIP is back on nginx01'


1.4: Point the node nodes at the nginx HA cluster

1. Modify the configuration files on both node nodes so the server IP is the unified VIP (three files). If this is not changed to the VIP, every back-end node actually points at the primary master's apiserver: master02 sits idle as wasted capacity, and when the primary master goes down master02 cannot serve requests either, so there is no real high availability between the two masters. The load balancer is therefore essential.

//Change: server: https://192.168.100.100:6443 (set it to the VIP in all three files)

[root@node1 cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig

[root@node1 cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig

[root@node1 cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
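
Alternatively, a single sed pass rewrites the server line in all three files, assuming they currently point at the original master, 192.168.100.170 (adjust the old address if yours differs):

[root@node1 cfg]# sed -i 's#server: https://192.168.100.170:6443#server: https://192.168.100.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig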

//restart the services
[root@node1 cfg]# systemctl restart kubelet.service 

[root@node1 cfg]# systemctl restart kube-proxy.service


2. Verify the changes

//be sure to run the grep check from this directory
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]#  grep 100 *
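
A more targeted check; every line should now show the VIP:

[root@localhost cfg]# grep 'server:' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig	'//each should read server: https://192.168.100.100:6443'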


3. On nginx01, check nginx's k8s access log to confirm the nodes are reaching the VIP:

[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
192.168.100.190 192.168.100.170:6443 - [29/Sep/2020:23:12:33 +0800] 200 1122
192.168.100.190 192.168.100.160:6443 - [29/Sep/2020:23:12:33 +0800] 200 1121
'//these log entries were generated when the node services were restarted'

With load balancing in place, access traffic lands on the load balancers, greatly relieving pressure on the masters.

1.5: Test the multi-node k8s cluster

  • Operate on master1: create a Pod to test the cluster
[root@master bin]# ls
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
[root@master bin]# ./kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master bin]# pwd
/opt/kubernetes/bin
  • Check the pod status
[root@master bin]# ./kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx2-cc5f746cb-fjgrk   1/1     Running            0          2m46s
  • Bind the cluster's anonymous user to the cluster-admin role (fixes the "logs cannot be viewed" problem)
[root@master bin]# ./kubectl logs nginx2-cc5f746cb-fjgrk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx2-cc5f746cb-fjgrk)
# the error is due to insufficient permissions; grant access as follows
Fix (bind the anonymous user; fine for a lab, but granting cluster-admin to system:anonymous is unsafe in production):
[root@master bin]# ./kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master bin]# ./kubectl logs nginx2-cc5f746cb-fjgrk "no output this time, but no Forbidden error"
  • Check the Pod network
[root@master bin]# ./kubectl get pods -o wide
NAME                     READY   STATUS             RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx2-cc5f746cb-fjgrk   1/1     Running            0          53s   172.17.71.3   192.168.100.190   <none>
# -o wide shows which node the pod was scheduled on; here it is on 192.168.100.190
# so the pod created on master1 was scheduled onto node02 (192.168.100.190)
# it can be accessed directly from a node on the same flannel network


  • The flannel network component is deployed across the node nodes, so pods are reachable from every node. Open 172.17.71.3 in a browser on node1 or node2 (or test it with curl, as shown below).
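
The same check from the command line (the pod IP 172.17.71.3 comes from the kubectl get pods -o wide output above):

[root@node1 ~]# curl -I http://172.17.71.3	'//expect HTTP/1.1 200 OK with a Server: nginx header'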


2: Set up the k8s dashboard

Operations on master1

  • The yaml files needed for the official web UI can be downloaded from the dashboard addon directory:

  • https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

  • Background

    Kuboard is a microservice management panel for Kubernetes. Installing Kuboard assumes you already have a Kubernetes cluster; if you do not, install one first as described in the previous chapter.

  • 1. Install Kuboard

[root@master ~]# kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
deployment.apps/kuboard created
service/kuboard created
serviceaccount/kuboard-user created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-user created
serviceaccount/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer created

  • 2. Get a token

Logging in to Kuboard requires a token; depending on the permission level, fetch the admin token or the read-only token with the following commands:

Admin token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')

Read-only (viewer) token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-viewer | awk '{print $1}')

[root@master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')
Name:         kuboard-user-token-99c7z
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kuboard-user
              kubernetes.io/service-account.uid: f51570ce-02bd-11eb-b567-000c29a0cac9

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJvYXJkLXVzZXItdG9rZW4tOTljN3oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3Vib2FyZC11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjUxNTcwY2UtMDJiZC0xMWViLWI1NjctMDAwYzI5YTBjYWM5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmt1Ym9hcmQtdXNlciJ9.UrnTxeX5_DKQyv5NM_w9SylMJSFe9kzE5Ae4xZtkE3OZr1tNNAotamv4AO9wix0_RijCLOtrpgz7FAEBYkdXDsq9qNosmhZ03sof6oglZkQYQIDF9ghL9twJmJB4tynHbiNYlHc1f0_A-gvWjp3oCwsMYzYV5zHtOd0xXUz-PfymhVN-56nJUUtkPSQZwVhXVSR1ala-CGayDwg5KTouoSJcWYGJImtDP6hx7kvrK_X6DTZwNSxIHJn1K6YnrSHeMabxX2fyR2WHathZttZ4qr58VXQmGimYa6SOIu-uyDHahVJizEBC_Idc4QDmkfDM4XhprvkGmUCjTeiRuNPHZA
ca.crt:     1359 bytes
  • 3. Access Kuboard

Get the port number that Kuboard exposes with:

kubectl get svc -n kube-system

[root@master ~]# kubectl get svc -n kube-system
NAME      TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kuboard   NodePort   10.0.0.185   <none>        80:32567/TCP   8m50s
[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-r7z7j     1/1     Running   0          10h
nginx2-cc5f746cb-fjgrk    1/1     Running   0          10h
nginx3-674f7cffbd-qc4wj   1/1     Running   0          10h

The last line of the output shows that the kuboard service is exposed on NodePort 32567; use any worker node's IP plus this port to access Kuboard.

Note: if you are using cloud servers, make sure port 32567 is open.
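
To extract just the NodePort for scripting (a sketch using kubectl's jsonpath output):

[root@master ~]# kubectl get svc kuboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'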


Note: in this experiment, only one of the node nodes could be used to log in to k8s.

3: Verify elastic scaling

[root@master ~]# kubectl run httpd --image=httpd
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/httpd created

  • On node1, stop the running httpd container to simulate a failure

[root@node1 ~]# docker stop a9078e2e3656
a9078e2e3656
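
kubelet notices the stopped container and restarts it according to the pod's restart policy; this can be watched from master (a sketch, the pod name will differ in your environment):

[root@master ~]# kubectl get pods -w	'//the RESTARTS count of the httpd pod increments as it comes back'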


  • Add replicas: the new replicas are created slowly and sit in Pending for a while; after a moment there are four replicas (a CLI equivalent is sketched below)
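
The screenshots scale the workload through the Kuboard UI; a CLI equivalent (a sketch, assuming the httpd Deployment created above is the one being scaled to four replicas):

[root@master ~]# kubectl scale deployment httpd --replicas=4
[root@master ~]# kubectl get pods	'//new pods may sit in Pending until they are scheduled'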


  • Troubleshooting: error messages can be found in /var/log/messages; troubleshoot based on what they report