Preface

Our company's new DDD-architecture project is going cloud-native, so knowledge of Kubernetes and Docker has become essential.
So I quietly put in some extra study behind my colleagues' backs and ended up the only person in the company to score "excellent" on the Docker assessment. As for what happened after that... let's just say it's a long story.

This article uses kubeadm to install a highly available Kubernetes cluster.

Note: this setup is for learning only; for production, deploy with the binary method.


Basic Environment Configuration

Node Planning

Hostname            IP Address             Description
k8s-master01 ~ 03   192.168.239.11 ~ 13    master nodes x 3
k8s-master-lb       192.168.239.226        keepalived virtual IP
k8s-node01 ~ 02     192.168.239.21 ~ 22    worker nodes x 2

Network Segments and Software Versions

Item              Value
OS version        CentOS 7.9
Docker version    20.10.x
Pod subnet        172.16.0.0/12
Service subnet    192.168.0.0/16

Basic Configuration

Configure hosts on all nodes; edit /etc/hosts as follows:

192.168.239.11 k8s-master01
192.168.239.12 k8s-master02
192.168.239.13 k8s-master03
192.168.239.226 k8s-master-lb # for a non-HA cluster, this should be Master01's IP
192.168.239.21 k8s-node01
192.168.239.22 k8s-node02

Configure the yum repositories:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the required tools:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable the firewall, SELinux, dnsmasq, and swap on all nodes:

Note: NetworkManager must either be configured properly or disabled; otherwise, once the Pod count grows, the host can become unstable (unable to manage its network interfaces). A sketch of the "configure it" option follows the commands below.

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
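
If you would rather keep NetworkManager running than disable it, a minimal sketch of the "configure it" option is to tell NetworkManager to ignore the CNI-managed interfaces. The interface-name patterns below are an assumption based on Calico, which this guide installs later:

cat <<EOF > /etc/NetworkManager/conf.d/calico.conf
[keyfile]
# assumption: Calico's interfaces are named cali* and tunl*; NetworkManager should leave them alone
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager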

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Time synchronization:

Note: clock skew between nodes will cause certificate problems.

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# add the following line via crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
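
A non-interactive equivalent of editing the crontab, as a sketch (appends the same ntpdate job to root's crontab):

(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -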

Configure limits on all nodes:

# temporary (current session only)
ulimit -SHn 65535

vim /etc/security/limits.conf
# append the following at the end to make the change permanent
* soft nofile 65536 
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
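
If you prefer not to open vim, a sketch that appends the same limits non-interactively:

cat <<EOF >> /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF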

Passwordless SSH: configure it on Master01 only; files are then pushed from Master01 to the other machines.

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Download the installation source files:

cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git

Upgrade the system on all nodes and reboot:

yum update -y --exclude=kernel* && reboot

Kernel Upgrade

CentOS 7 needs its kernel upgraded to 4.18+; this guide uses 4.19. The packages are downloaded as standalone RPMs so the installation can be done offline; installing straight from the repository would pull the latest version instead.

CentOS 7 ships with kernel 3.10 by default; with Docker and Kubernetes, a series of problems appears once the cluster scales up, e.g. running out of disk space, network failures, or the host going down.

If you pick the newest kernel on very old physical machines, they may fail to boot after the upgrade.

Download the kernel packages:

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from the master01 node to the other nodes:

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes:

cd /root && yum localinstall -y kernel-ml*
# change the kernel boot order on all nodes
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# check that the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# reboot all nodes, then verify the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on all nodes (IPVS management tool; IPVS is recommended in production because it is more efficient than iptables):

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules on all nodes. In kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below, use nf_conntrack_ipv4:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf 
# add the following content
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# enable automatic loading at boot
systemctl enable --now systemd-modules-load.service

Configure Kubernetes-related kernel parameters on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

## apply
sysctl --system

After rebooting the servers, verify the modules are still loaded:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

K8s Components and Runtime Installation

Installing Containerd

No need to dwell on what containerd is; it is also the reason Kubernetes formally removed the dockershim code from the kubelet (deprecated since 1.20, removed in 1.24).

Note: Containerd configuration is optional here. If you use Docker as the container runtime, simply install and start Docker, then skip ahead to the K8s component installation.

Check the available versions:

yum list docker-ce --showduplicates | sort -r

Install docker-ce 20.10 on all nodes:

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Configure the kernel modules required by Containerd (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes:

# modprobe -- overlay
# modprobe -- br_netfilter

Configure the kernel parameters required by Containerd on all nodes:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the kernel parameters on all nodes:

sysctl --system

Generate Containerd's default configuration file on all nodes:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Switch Containerd's cgroup driver to systemd on all nodes:

vim /etc/containerd/config.toml
Find containerd.runtimes.runc.options and add SystemdCgroup = true.
On all nodes, also change sandbox_image to a Pause image address matching your environment, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 (a sed sketch of both edits follows).
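
A sketch of the same two edits done with sed instead of vim, assuming the default config generated above already contains a SystemdCgroup key and a sandbox_image line (if the SystemdCgroup key is missing, add it manually under the runc options section):

# switch the runc cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# point the pause image at the mirror used in this guide
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
# verify both edits landed
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml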

Start Containerd on all nodes and enable it at boot:

systemctl daemon-reload
systemctl enable --now containerd
docker info

Configure the runtime endpoint that the crictl client connects to on all nodes:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Installing the K8s Components

yum list kubeadm.x86_64 --showduplicates | sort -r

Because the upstream repository's sync method is not exposed, the index GPG check may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl.

# Install the latest 1.23.x kubeadm, kubelet and kubectl on all nodes:
yum install -y --nogpgcheck kubeadm-1.23* kubelet-1.23* kubectl-1.23*

Change the kubelet configuration to use Containerd as the runtime:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Enable kubelet to start at boot:

systemctl daemon-reload
systemctl enable --now kubelet

High Availability

Install HAProxy and Keepalived via yum on all master nodes:

yum install keepalived haproxy -y

Configure HAProxy on all master nodes (the configuration is identical on every master; a distribution sketch follows the config):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01	192.168.239.11:6443  check
  server k8s-master02	192.168.239.12:6443  check
  server k8s-master03	192.168.239.13:6443  check
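
Since the HAProxy configuration is the same on every master, a sketch to write it once on master01 and push it to the others (assumes the passwordless SSH configured earlier and that haproxy is already installed on the other masters):

for i in k8s-master02 k8s-master03; do scp /etc/haproxy/haproxy.cfg $i:/etc/haproxy/haproxy.cfg; done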

Configure Keepalived on all master nodes. Unlike HAProxy, the Keepalived configuration is not identical across masters: state, mcast_src_ip, and priority differ per node.

Master01's configuration:
[root@k8s-master01 etc]# mkdir /etc/keepalived

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    mcast_src_ip 192.168.239.11
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.239.226
    }
    track_script {
       chk_apiserver
    }
}
Master02's configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    mcast_src_ip 192.168.239.12
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.239.226
    }
    track_script {
       chk_apiserver
    }
}
Master03's configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    mcast_src_ip 192.168.239.13
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.239.226
    }
    track_script {
       chk_apiserver
    }
}

Configure the Keepalived health-check script on all master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived

Test the VIP:

[root@k8s-master01 ~]# ping 192.168.239.226 -c 4
PING 192.168.239.226 (192.168.239.226) 56(84) bytes of data.
64 bytes from 192.168.239.226: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.239.226: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.239.226: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.239.226: icmp_seq=4 ttl=64 time=0.063 ms

Cluster Initialization

Initializing Master01

Create the kubeadm-config.yaml file on the Master01 node as follows:

Note: if this is not an HA cluster, change 192.168.239.226:16443 to Master01's address, and change 16443 to the apiserver port, which defaults to 6443.

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.239.11
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.239.226
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.239.226:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.1 # change this to match the output of kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}

Migrate the kubeadm config file to the current API version:

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Copy the new.yaml file to the other master nodes:

for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done

Pre-pull the images on all master nodes:

kubeadm config images pull --config /root/new.yaml 

Enable kubelet at boot on all nodes:

systemctl enable --now kubelet  # if it fails to start now, ignore it; it will come up once initialization succeeds

Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes then simply join Master01:

kubeadm init --config /root/new.yaml  --upload-certs

After a successful init, a token is printed for other nodes to join with, so record the token (and certificate key) from the output:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.239.226:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:fccf02f857eaf8082c9a5aaeacaf85646f85a6ae8c25d3af4f557af6ce25e139 \
	--control-plane --certificate-key a38aec9fc224588f060e5f0e15b07362600390e32c4abb6c963f38dec18f4fdf

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.239.226:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:fccf02f857eaf8082c9a5aaeacaf85646f85a6ae8c25d3af4f557af6ce25e139 

Configure the KUBECONFIG environment variable on Master01 (any node that has admin.conf will do) so kubectl can reach the Kubernetes cluster; an alternative using the default kubeconfig path follows the snippet:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
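
An equivalent that kubeadm itself usually suggests is to copy admin.conf into the default kubeconfig location instead of exporting KUBECONFIG:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config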

Check the node status:

 [root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.0

With the kubeadm-based installation, all control-plane components run as containers in the kube-system namespace; check the Pod status:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-777d78ff6f-kstsz               0/1       Pending   0          14m       <none>          <none>
coredns-777d78ff6f-rlfr5               0/1       Pending   0          14m       <none>          <none>
etcd-k8s-master01                      1/1       Running   0          14m       192.168.0.201   k8s-master01
kube-apiserver-k8s-master01            1/1       Running   0          13m       192.168.0.201   k8s-master01
kube-controller-manager-k8s-master01   1/1       Running   0          13m       192.168.0.201   k8s-master01
kube-proxy-8d4qc                       1/1       Running   0          14m       192.168.0.201   k8s-master01
kube-scheduler-k8s-master01            1/1       Running   0          13m       192.168.0.201   k8s-master01

Adding Master Nodes

Join using the control-plane join command produced by the init output above:

kubeadm join 192.168.239.226:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:fccf02f857eaf8082c9a5aaeacaf85646f85a6ae8c25d3af4f557af6ce25e139 \
	--control-plane --certificate-key a38aec9fc224588f060e5f0e15b07362600390e32c4abb6c963f38dec18f4fdf

After the token expires, generate a new one:

kubeadm token create --print-join-command

To join an additional master, also regenerate the --certificate-key:

kubeadm init phase upload-certs --upload-certs

Inspect the token:

kubectl get secret -n kube-system
# token expiration time
kubectl get secret -n kube-system bootstrap-token-7t2weq -oyaml

Decode the base64 value:

echo "MjAyMi0wNC0yN1QwNzoyOTowM1o=" | base64 -d

Adding Worker Nodes

kubeadm join 192.168.239.226:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:fccf02f857eaf8082c9a5aaeacaf85646f85a6ae8c25d3af4f557af6ce25e139

After the workers have joined, the result looks like this (screenshot: node status).
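
To reproduce the check shown in the screenshot (the nodes will stay NotReady until the CNI plugin is installed in the next step):

kubectl get nodes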

Installing the Calico CNI Plugin

Execute on master01 only:

cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/

Modify the Pod subnet:

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml

After creation, wait a few minutes and then check the status (screenshot: container and node status).
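
A sketch of the checks behind the screenshot:

kubectl get pods -n kube-system -o wide
kubectl get nodes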

Deploying Metrics Server

Install metrics server:

cd /root/k8s-ha-install/kubeadm-metrics-server

# kubectl  create -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the Metrics Server status (screenshot: metrics-server Pod running).
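
A sketch of the check behind the screenshot:

kubectl get pods -n kube-system | grep metrics-server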
Once it is healthy, query the metrics:

# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   153m         3%     1701Mi          44%       
k8s-master02   125m         3%     1693Mi          44%       
k8s-master03   129m         3%     1590Mi          41%       
k8s-node01     73m          1%     989Mi           25%       
k8s-node02     64m          1%     950Mi           24%       
# kubectl top po -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi            
kube-system   calico-node-6gqpb                          21m          85Mi            
kube-system   calico-node-bmvjt                          29m          76Mi            
kube-system   calico-node-hdp9c                          15m          82Mi            
kube-system   calico-node-wwrfv                          23m          86Mi            
kube-system   calico-node-zzv88                          22m          84Mi            
kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi            
kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi            
kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi            
kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi            
kube-system   etcd-k8s-master01                          24m          96Mi            
kube-system   etcd-k8s-master02                          20m          91Mi            
kube-system   etcd-k8s-master03                          21m          92Mi            
kube-system   kube-apiserver-k8s-master01                41m          502Mi           
kube-system   kube-apiserver-k8s-master02                35m          476Mi           
kube-system   kube-apiserver-k8s-master03                71m          480Mi           
kube-system   kube-controller-manager-k8s-master01       15m          65Mi            
kube-system   kube-controller-manager-k8s-master02       1m           26Mi            
kube-system   kube-controller-manager-k8s-master03       2m           27Mi            
kube-system   kube-proxy-8lt45                           1m           18Mi            
kube-system   kube-proxy-d6jfh                           1m           18Mi            
kube-system   kube-proxy-hfnvz                           1m           19Mi            
kube-system   kube-proxy-nsms8                           1m           18Mi            
kube-system   kube-proxy-xmlhq                           3m           21Mi            
kube-system   kube-scheduler-k8s-master01                2m           26Mi            
kube-system   kube-scheduler-k8s-master02                2m           24Mi            
kube-system   kube-scheduler-k8s-master03                2m           24Mi            
kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi

Deploying the Dashboard

Installation

cd /root/k8s-ha-install/dashboard/

# check the image version
grep "image" dashboard.yaml

# install
[root@k8s-master01 dashboard]# kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Logging into the Dashboard

Check the dashboard port:

# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
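
Accessing the Dashboard from outside the cluster assumes the Service is of type NodePort; if the output above shows ClusterIP instead, a sketch to switch it (Service name and namespace as used above):

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'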

Check the admin token:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

Check the listening ports:

netstat -lntp


The Dashboard can then be accessed via any node's IP plus that port (screenshots: access and login pages).
