Table of Contents

I. Cluster Planning

II. Basic Environment Configuration

1. Configure the /etc/hosts file

2. Set hostnames

3. Install yum repositories (CentOS 7)

4. Install essential tools

5. Disable firewalld, dnsmasq, and SELinux on all nodes

6. Disable the swap partition

7. Synchronize time on all nodes

8. Configure limits on all nodes

9. Set up passwordless SSH from Master01 to the other nodes

10. Download installation files on Master01

III. Kernel Upgrade

1. Create the directory (all nodes)

2. Download the kernel on master01

3. Copy from master01 to the other nodes

4. Install the kernel on all nodes

5. Change the kernel boot order on all nodes

6. Check that the default kernel is 4.19

7. Reboot all nodes, then verify the kernel is 4.19

8. Install and configure ipvsadm on all nodes

9. Configure IPVS modules on all nodes

10. Check that the modules are loaded

11. Enable some kernel parameters required by a Kubernetes cluster; configure them on all nodes

12. After configuring the kernel on all nodes, reboot and verify the modules are still loaded

IV. Runtime Installation

1. Containerd as the runtime

1.1 Install docker-ce 20.10 on all nodes

1.2 First configure the modules Containerd needs (all nodes)

1.3 Load the modules on all nodes

1.4 Configure the kernel parameters Containerd needs on all nodes

1.5 Apply the kernel parameters on all nodes

1.6 Generate the Containerd configuration file on all nodes

1.7 Change Containerd's cgroup driver to systemd on all nodes

1.8 Point sandbox_image at a pause image matching your own version on all nodes

1.9 Start Containerd on all nodes and enable it at boot

1.10 Configure the runtime endpoint for the crictl client on all nodes

2. Docker as the runtime

2.1 Install docker-ce 20.10 on all nodes

2.2 Newer versions of the kubelet recommend systemd, so change Docker's CgroupDriver to systemd as well

2.3 Enable Docker at boot on all nodes

V. Installing the Kubernetes Components

1. First, on Master01, check what the latest Kubernetes version is

2. Install kubeadm, kubelet, and kubectl at the latest 1.23 patch release, 1.23.10, on all nodes

3. If Containerd is the chosen runtime, change the kubelet configuration to use Containerd as the runtime

4. Enable kubelet at boot on all nodes

VI. Installing the High-Availability Components

1. Install HAProxy and KeepAlived via yum on all master nodes

2. Configure HAProxy on all master nodes (the HAProxy configuration is identical on every master)

3. Configure KeepAlived on all master nodes

4. Configure the KeepAlived health-check script on all master nodes

5. Start haproxy and keepalived

6. Important: if keepalived and haproxy are installed, verify that keepalived is working

VII. Cluster Initialization

1. Create the kubeadm-config.yaml file on Master01 as follows

2. Migrate the kubeadm config file

3. Copy the new.yaml file to the other master nodes

4. Then pre-pull the images on all master nodes to save time during initialization (the other nodes need no configuration changes, not even IP addresses)

5. Enable kubelet at boot on all nodes

6. Initialize the Master01 node; initialization generates the certificates and configuration files under /etc/kubernetes, after which the other master nodes simply join Master01

7. Configure environment variables on Master01 for accessing the Kubernetes cluster

8. Check node status

9. With a kubeadm-based install, all system components run as containers in the kube-system namespace; Pod status can be checked now

VIII. Highly Available Masters

1. Generate a new token after the old one expires

2. Masters also need a new --certificate-key

3. Join the other masters to the cluster; run on master02 and master03

4. Check the current status

IX. Configuring the Worker Nodes

1. Join the worker nodes to the cluster

2. After all nodes have joined, check the cluster status

X. Installing Calico

1. Run the following steps only on master01; switch to the right branch

2. Set the Pod subnet

3. Deploy Calico

4. Check container and node status

XI. Deploying Metrics

1. Copy front-proxy-ca.crt from the Master01 node to all worker nodes

2. Install metrics server

3. Check status

4. View metrics

XII. Deploying the Dashboard

1. Install a specific dashboard version

2. Install the latest dashboard

3. Log in to the Dashboard

4. Change the dashboard Service to NodePort

5. Retrieve the token


I. Cluster Planning

Hostname        IP Address         Description
k8s-master01    192.168.126.140    master node 1
k8s-master02    192.168.126.141    master node 2
k8s-master03    192.168.126.142    master node 3
k8s-master-lb   192.168.126.236    keepalived virtual IP
k8s-node01      192.168.126.143    worker node 1
k8s-node02      192.168.126.144    worker node 2

Configuration     Notes
OS version        CentOS 7.9.2009
Docker version    20.10.17
Host subnet       192.168.126.0/24
Pod subnet        172.16.0.0/12
Service subnet    10.96.0.0/16

II. Basic Environment Configuration

1. Configure the /etc/hosts file

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.126.140 k8s-master01
192.168.126.141 k8s-master02
192.168.126.142 k8s-master03
192.168.126.236 k8s-master-lb # if this is not an HA cluster, use Master01's IP here
192.168.126.143 k8s-node01
192.168.126.144 k8s-node02
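The guide assumes this hosts file is identical on every node. One way to push it out from master01 (an assumed convenience step, not in the original; expect password prompts until passwordless SSH is set up in step 9):

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/hosts $i:/etc/hosts; done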

2. Set hostnames (run the matching command on each node)

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

3. Install yum repositories (CentOS 7)

Run on all hosts:

[root@k8s-master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

4. Install essential tools

[root@localhost ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

5. Disable firewalld, dnsmasq, and SELinux on all nodes

CentOS 7 also needs NetworkManager disabled; CentOS 8 does not.

[root@localhost ~]# systemctl disable --now firewalld
[root@localhost ~]# systemctl disable --now dnsmasq
[root@localhost ~]# systemctl disable --now NetworkManager
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@localhost ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

6. Disable the swap partition

[root@localhost ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@localhost ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
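A quick check (an assumed verification step) that swap is fully off after these commands:

free -h   # the Swap line should show 0B for total, used, and free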

7. Synchronize time on all nodes

Install ntpdate:
[root@localhost ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
[root@localhost ~]# yum install ntpdate -y

Sync the time on all nodes. Time-sync configuration:
[root@localhost ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@localhost ~]# echo 'Asia/Shanghai' >/etc/timezone
[root@localhost ~]# ntpdate time2.aliyun.com
30 Jun 12:38:19 ntpdate[12176]: adjust time server 203.107.6.88 offset -0.002743 sec

# Add to crontab
[root@localhost ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

8. Configure limits on all nodes

[root@localhost ~]# ulimit -SHn 65535

[root@localhost ~]# vim /etc/security/limits.conf
# Append the following
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited 

9. Set up passwordless SSH from Master01 to the other nodes

[root@k8s-master01 ~]# ssh-keygen -t rsa

Configure passwordless login from Master01 to the other nodes:
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

10. Download installation files on Master01

 [root@k8s-master01 ~]# cd /opt/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 461 (delta 2), reused 5 (delta 1), pack-reused 449
Receiving objects: 100% (461/461), 19.52 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (163/163), done.

III. Kernel Upgrade

1. Create the directory (all nodes)

 [root@k8s-master01 ~]# mkdir -p /opt/kernel

2. Download the kernel on master01

 [root@k8s-master01 ~]# cd /opt/kernel
 [root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
 [root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

3. Copy from master01 to the other nodes

 [root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/opt/kernel; done

4. Install the kernel on all nodes

cd /opt/kernel && yum localinstall -y kernel-ml*

5. Change the kernel boot order on all nodes

[root@k8s-master01 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg 
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-4.19.12-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-1c01f6af1f1d40ccb9988c844650f9a3
Found initrd image: /boot/initramfs-0-rescue-1c01f6af1f1d40ccb9988c844650f9a3.img

 
[root@k8s-master01 kernel]# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

6. Check that the default kernel is 4.19

[root@k8s-master01 kernel]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

7. Reboot all nodes, then verify the kernel is 4.19

[root@k8s-master01 kernel]# reboot
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

8. Install and configure ipvsadm on all nodes

[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

9. Configure IPVS modules on all nodes

Note: on kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below, use nf_conntrack_ipv4:

[root@k8s-master01 ~]# modprobe -- ip_vs
[root@k8s-master01 ~]# modprobe -- ip_vs_rr
[root@k8s-master01 ~]# modprobe -- ip_vs_wrr
[root@k8s-master01 ~]# modprobe -- ip_vs_sh
[root@k8s-master01 ~]# modprobe -- nf_conntrack

[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip


# Then enable the service that loads these modules:
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service

10. Check that the modules are loaded

[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 151552  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          143360  1 ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

11. Enable some kernel parameters required by a Kubernetes cluster; configure them on all nodes

[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
 
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
 
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Reload
[root@k8s-master01 ~]# sysctl --system

12. After configuring the kernel on all nodes, reboot and verify the modules are still loaded after the reboot

[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

IV. Runtime Installation

1. Containerd as the runtime

1.1 Install docker-ce 20.10 on all nodes

[root@k8s-master01 ~]# yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd -y
[root@k8s-master01 ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

1.2 First configure the modules Containerd needs (all nodes)

[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

1.3 Load the modules on all nodes

[root@k8s-master01 ~]# modprobe -- overlay
[root@k8s-master01 ~]# modprobe -- br_netfilter

1.4 Configure the kernel parameters Containerd needs on all nodes

[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables  = 1
> net.ipv4.ip_forward                 = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1

1.5 Apply the kernel parameters on all nodes

[root@k8s-master01 ~]# sysctl --system

1.6 Generate the Containerd configuration file on all nodes

[root@k8s-master01 ~]# mkdir -p /etc/containerd

[root@k8s-master01 ~]# containerd config default | tee /etc/containerd/config.toml

1.7 Change Containerd's cgroup driver to systemd on all nodes

[root@k8s-master01 ~]# vim /etc/containerd/config.toml
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

Locate the containerd.runtimes.runc.options section and set SystemdCgroup = true there (modify the key if it already exists; adding a duplicate entry causes an error).
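If you prefer a non-interactive edit, a hedged one-liner that assumes the file still contains the default SystemdCgroup = false generated by containerd config default:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml   # should now print SystemdCgroup = true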

1.8 Point sandbox_image at a pause image matching your own version on all nodes

        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6

[root@k8s-master01 ~]# vim /etc/containerd/config.toml
        sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
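Equivalently, a hedged non-interactive version of the same edit (it rewrites whatever sandbox_image value the default config generated):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml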

1.9 Start Containerd on all nodes and enable it at boot

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now containerd
[root@k8s-master01 ~]# systemctl status containerd.service
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-06-30 14:23:16 CST; 2min 36s ago
     Docs: https://containerd.io
  Process: 1720 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 1722 (containerd)
    Tasks: 9
   Memory: 25.6M
   CGroup: /system.slice/containerd.service
           └─1722 /usr/bin/containerd

Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759584551+08:00" level=info msg="Start subscribing containerd event"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759661538+08:00" level=info msg="Start recovering state"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759744630+08:00" level=info msg="Start event monitor"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759769518+08:00" level=info msg="Start snapshots syncer"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759785914+08:00" level=info msg="Start cni network conf syncer for default"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.759797273+08:00" level=info msg="Start streaming server"
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.760121521+08:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.760180705+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 30 14:23:16 k8s-master01 containerd[1722]: time="2022-06-30T14:23:16.760285440+08:00" level=info msg="containerd successfully booted in 0.053874s"
Jun 30 14:23:16 k8s-master01 systemd[1]: Started containerd container runtime.

1.10 Configure the runtime endpoint for the crictl client on all nodes

[root@k8s-master01 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
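A quick sanity check (an assumed verification step) that crictl can now reach containerd over this endpoint:

crictl info   # should print the runtime status as JSON with no connection errors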

2. Docker as the runtime

2.1 Install docker-ce 20.10 on all nodes:

[root@k8s-master01 ~]# yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

2.2 Newer versions of the kubelet recommend systemd, so change Docker's CgroupDriver to systemd as well

[root@k8s-master01 ~]# mkdir /etc/docker
[root@k8s-master01 ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

2.3 Enable Docker at boot on all nodes

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now docker
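To confirm the cgroup driver change took effect, an assumed verification step:

docker info | grep -i 'cgroup driver'   # expected output: Cgroup Driver: systemd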

V. Installing the Kubernetes Components

1. First, on Master01, check what the latest Kubernetes version is

[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r

2. Install kubeadm, kubelet, and kubectl at the latest 1.23 patch release, 1.23.10, on all nodes

[root@k8s-master01 ~]# yum install kubeadm-1.23.10 kubelet-1.23.10 kubectl-1.23.10 -y

3. If Containerd is the chosen runtime, change the kubelet configuration to use Containerd as the runtime

[root@k8s-master01 ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Note:

        Do not run the command above if Containerd is not your runtime.

4. Enable kubelet at boot on all nodes

(The cluster has not been initialized yet, so kubelet has no configuration file and cannot start; this can be ignored for now.)

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

        kubelet will not start at this point, and its logs will show errors; this is expected and harmless.
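To confirm the failure is only the missing configuration, one way to inspect the logs (an assumed check, not in the original steps):

journalctl -u kubelet --no-pager -n 20   # expect errors about the missing /var/lib/kubelet/config.yaml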

VI. Installing the High-Availability Components

Note:

        If this is not a highly available cluster, haproxy and keepalived do not need to be installed.

        On public clouds, use the provider's own load balancer instead of haproxy and keepalived, such as Alibaba Cloud SLB or Tencent Cloud ELB, because most public clouds do not support keepalived. Also note that on Alibaba Cloud the kubectl client cannot be placed on a master node: SLB has a loopback problem, meaning servers proxied by the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this problem, so it is the recommended choice here.

1. Install HAProxy and KeepAlived via yum on all master nodes

[root@k8s-master01 ~]# yum install keepalived haproxy -y

2. Configure HAProxy on all master nodes (the HAProxy configuration is identical on every master)

[root@k8s-master01 ~]# mkdir /etc/haproxy -p
[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.126.140:6443  check
  server k8s-master02 192.168.126.141:6443  check
  server k8s-master03 192.168.126.142:6443  check
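Before starting the service, a syntax check of the file just written (an assumed verification step):

haproxy -c -f /etc/haproxy/haproxy.cfg   # should report that the configuration file is valid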

3. Configure KeepAlived on all master nodes

 [root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf 

Master01:

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.126.140
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.126.236
    }
    track_script {
       chk_apiserver
    }
}

Master02:

[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.126.141
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.126.236
    }
    track_script {
       chk_apiserver
    }
}

Master03:

[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.126.142
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.126.236
    }
    track_script {
       chk_apiserver
    }
}

4. Configure the KeepAlived health-check script on all master nodes

[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh 
#!/bin/bash
   

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
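keepalived's vrrp_script can only run this check if the file is executable, a step the text implies but does not show:

chmod +x /etc/keepalived/check_apiserver.sh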

5. Start haproxy and keepalived

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@k8s-master01 ~]# systemctl enable --now keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

6. Important: if keepalived and haproxy are installed, verify that keepalived is working

[root@k8s-master01 ~]# ping 192.168.126.236 -c 4
PING 192.168.126.236 (192.168.126.236) 56(84) bytes of data.
64 bytes from 192.168.126.236: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 192.168.126.236: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 192.168.126.236: icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from 192.168.126.236: icmp_seq=4 ttl=64 time=0.057 ms

--- 192.168.126.236 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3084ms
rtt min/avg/max/mdev = 0.055/0.060/0.074/0.009 ms



[root@k8s-master01 ~]# telnet 192.168.126.236 16443
Trying 192.168.126.236...
Connected to 192.168.126.236.
Escape character is '^]'.
Connection closed by foreign host.

VII. Cluster Initialization

Official initialization documentation:

Creating Highly Available Clusters with kubeadm | Kubernetes

1. Create the kubeadm-config.yaml file on Master01 as follows

Note:

        If this is not a highly available cluster, change 192.168.126.236:16443 to master01's address and change 16443 to the apiserver port (the default is 6443). Also make sure the kubernetesVersion value matches the kubeadm version on your servers (check with kubeadm version).

Note:

        In the file below, the host subnet, the podSubnet, and the serviceSubnet must not overlap.

Run the following on master01:

[root@k8s-master01 opt]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.126.140
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock  # use this if Docker is the runtime
  #criSocket: /run/containerd/containerd.sock # use this if Containerd is the runtime
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.126.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.126.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.10 # change this version to match the output of kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 10.96.0.0/16
scheduler: {}

2. Migrate the kubeadm config file

[root@k8s-master01 opt]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

3. Copy the new.yaml file to the other master nodes

[root@k8s-master01 opt]# for i in k8s-master02 k8s-master03; do scp /root/new.yaml $i:/root/; done

4. Then pre-pull the images on all master nodes to save time during initialization (the other nodes need no configuration changes, not even IP addresses)

[root@k8s-master01 opt]# kubeadm config images pull --config new.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6

5. Enable kubelet at boot on all nodes

[root@k8s-master01 ~]# systemctl enable --now kubelet

If it fails to start, ignore it; it will start once initialization succeeds.

6. Initialize the Master01 node; initialization generates the certificates and configuration files under /etc/kubernetes, after which the other master nodes simply join Master01

[root@k8s-master01 ~]# kubeadm init --config new.yaml  --upload-certs
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

   kubeadm join 192.168.126.236:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:60edae79ba574b8c4153b08018f70515d6794a1fca393c5ce7e248f1ce4ad6cc \
	--control-plane --certificate-key 65d6dea52fb5c545bc31e7962f8996fd959108ef1c665601dd9c94aeb244a212


Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

After successful initialization, a token is generated for other nodes to join with, so record the token (and join command) the initialization prints:

kubeadm join 192.168.126.236:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:60edae79ba574b8c4153b08018f70515d6794a1fca393c5ce7e248f1ce4ad6cc 
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

If initialization fails, reset and initialize again with the following command (do not run it unless initialization failed):

[root@k8s-master01 ~]# kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

7. Configure environment variables on Master01 for accessing the Kubernetes cluster

[root@k8s-master01 ~]# cat<<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF


[root@k8s-master01 ~]# source /root/.bashrc

8. Check node status

[root@k8s-master01 kubernetes]# kubectl get node
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   12m   v1.23.10

9. With a kubeadm-based install, all system components run as containers in the kube-system namespace; Pod status can be checked now

[root@k8s-master01 kubernetes]# kubectl get po -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-nf6tj               0/1     Pending   0          12m
coredns-65c54cc984-w6mfw               0/1     Pending   0          12m
etcd-k8s-master01                      1/1     Running   0          12m
kube-apiserver-k8s-master01            1/1     Running   0          12m
kube-controller-manager-k8s-master01   1/1     Running   0          12m
kube-proxy-8zjk4                       1/1     Running   0          12m
kube-scheduler-k8s-master01            1/1     Running   0          12m

VIII. Highly Available Masters

Note:

        The steps below are only needed if the token produced by the init command above has expired; if it has not, skip them and run the join command directly.

1. Generate a new token after the old one expires

[root@k8s-master01 ~]# kubeadm token create --print-join-command

2. Masters also need a new --certificate-key

[root@k8s-master01 kubernetes]# kubeadm init phase upload-certs  --upload-certs
I0825 09:59:19.524447    4733 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
504eb2cde60ea618eec9811fbac12a7e75be96684ce92195cf662e3f9d723c0f

If the token has not expired, just run the join command directly.

3. Join the other masters to the cluster; run on master02 and master03

[root@k8s-master02 ~]# kubeadm join 192.168.126.236:16443 --token 7t2weq.bjbawausm0jaxury \
> --discovery-token-ca-cert-hash sha256:60edae79ba574b8c4153b08018f70515d6794a1fca393c5ce7e248f1ce4ad6cc \
> --control-plane --certificate-key 72d5f4caf0e108e38c9f4ba472458ee8f94abf3834612661ec438d37f9bfa4b8


This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[root@k8s-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Check the current status

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m51s   v1.23.10
k8s-master02   NotReady   control-plane,master   2m23s   v1.23.10
k8s-master03   NotReady   control-plane,master   44s     v1.23.10

IX. Configuring the Worker Nodes

Worker nodes mainly run the company's business applications. In production, master nodes should not run any Pods beyond the system components; in a test environment, scheduling Pods on masters is acceptable to save resources.
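For a test cluster where the masters should accept ordinary workloads, one hedged way to allow that (not part of this guide's production setup) is to remove the master taint:

kubectl taint nodes --all node-role.kubernetes.io/master-   # the trailing dash removes the NoSchedule taint wherever present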

1. Join the worker nodes to the cluster

[root@k8s-node01 ~]# kubeadm join 192.168.126.236:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:60edae79ba574b8c4153b08018f70515d6794a1fca393c5ce7e248f1ce4ad6cc
 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node02 ~]# kubeadm join 192.168.126.236:16443 --token 7t2weq.bjbawausm0jaxury \
> --discovery-token-ca-cert-hash sha256:60edae79ba574b8c4153b08018f70515d6794a1fca393c5ce7e248f1ce4ad6cc
 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2. After all nodes have joined, check the cluster status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   10m     v1.23.10
k8s-master02   NotReady   control-plane,master   4m      v1.23.10
k8s-master03   NotReady   control-plane,master   2m21s   v1.23.10
k8s-node01     NotReady   <none>                 69s     v1.23.10
k8s-node02     NotReady   <none>                 37s     v1.23.10

X. Installing Calico

1. Run the following steps only on master01; switch to the right branch

[root@k8s-master01 ~]# cd /opt/k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# git checkout manual-installation-v1.23.x
Branch manual-installation-v1.23.x set up to track remote branch manual-installation-v1.23.x from origin.
Switched to a new branch 'manual-installation-v1.23.x'
[root@k8s-master01 k8s-ha-install]# cd calico/

2. Set the Pod subnet

[root@k8s-master01 calico]# cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= 
    - --cluster-cidr=172.16.0.0/12
[root@k8s-master01 calico]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

[root@k8s-master01 calico]# sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml

[root@k8s-master01 calico]# cat calico.yaml | grep 172.16.0.0/12
              value: "172.16.0.0/12"

3. Deploy Calico

[root@k8s-master01 calico]# kubectl apply -f calico.yaml

4. Check container and node status

[root@k8s-master01 calico]# kubectl get po -n kube-system -owide | grep calico 
calico-kube-controllers-6f6595874c-8xls4   1/1     Running   0               2m31s   172.17.125.3      k8s-node01     <none>           <none>
calico-node-2d78c                          1/1     Running   0               2m31s   192.168.126.140   k8s-master01   <none>           <none>
calico-node-lngs7                          1/1     Running   0               2m31s   192.168.126.142   k8s-master03   <none>           <none>
calico-node-lp787                          1/1     Running   0               2m31s   192.168.126.143   k8s-node01     <none>           <none>
calico-node-njwlv                          1/1     Running   0               2m31s   192.168.126.144   k8s-node02     <none>           <none>
calico-node-p7c8p                          1/1     Running   0               2m31s   192.168.126.141   k8s-master02   <none>           <none>
calico-typha-6b6cf8cbdf-jxph8              1/1     Running   0               2m31s   192.168.126.143   k8s-node01     <none>           <none>

XI. Deploying Metrics

Recent versions of Kubernetes collect system resource metrics through metrics-server, which reports memory, disk, CPU, and network usage for both nodes and Pods.

1. Copy front-proxy-ca.crt from the Master01 node to all worker nodes

[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
front-proxy-ca.crt                                                                                                                                        100% 1115     1.1MB/s   00:00    
[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
front-proxy-ca.crt                                                                                                                                        100% 1115   878.3KB/s   00:00    

2. Install metrics server

[root@k8s-master01 ~]# cd /opt/k8s-ha-install/kubeadm-metrics-server/
[root@k8s-master01 kubeadm-metrics-server]# ls
comp.yaml
[root@k8s-master01 kubeadm-metrics-server]# kubectl create -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

3. Check status

[root@k8s-master01 kubeadm-metrics-server]# kubectl get po -n kube-system -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-5cf8885b66-kgf7p   1/1     Running   0          43s

4. View metrics

[root@k8s-master01 ~]# kubectl top po -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-6f6595874c-9tbsk   2m           22Mi            
kube-system   calico-node-4qnvm                          32m          130Mi           
kube-system   calico-node-8d8tz                          43m          104Mi           
kube-system   calico-node-ckqs6                          44m          102Mi           
kube-system   calico-node-nsmvb                          34m          107Mi           
kube-system   calico-typha-6b6cf8cbdf-92n6h              3m           24Mi            
kube-system   coredns-65c54cc984-nf6tj                   1m           18Mi            
kube-system   coredns-65c54cc984-w6mfw                   1m           16Mi            
kube-system   etcd-k8s-master01                          38m          68Mi            
kube-system   etcd-k8s-master02                          32m          78Mi            
kube-system   kube-apiserver-k8s-master01                43m          420Mi           
kube-system   kube-apiserver-k8s-master02                42m          302Mi           
kube-system   kube-controller-manager-k8s-master01       17m          61Mi            
kube-system   kube-controller-manager-k8s-master02       3m           55Mi            
kube-system   kube-proxy-4l4j6                           1m           18Mi            
kube-system   kube-proxy-8zjk4                           1m           21Mi            
kube-system   kube-proxy-qv2vs                           1m           15Mi            
kube-system   kube-proxy-r29cc                           1m           37Mi            
kube-system   kube-scheduler-k8s-master01                3m           24Mi            
kube-system   kube-scheduler-k8s-master02                3m           41Mi            
kube-system   metrics-server-5cf8885b66-kgf7p            4m           14Mi   
[root@k8s-master01 ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   207m         10%    1713Mi          44%       
k8s-master02   185m         9%     1172Mi          62%       
k8s-master03   185m         9%     1172Mi          62% 
k8s-node01     102m         5%     773Mi           41%       
k8s-node02     102m         5%     748Mi           40% 

XII. Deploying the Dashboard

The Dashboard shows the various resources in the cluster; it can also tail Pod logs and run commands inside containers in real time.

1. Install a specific dashboard version

[root@k8s-master01 ~]# cd /opt/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# ls
dashboard-user.yaml  dashboard.yaml
[root@k8s-master01 dashboard]# kubectl  create -f .

2. Install the latest dashboard

Official GitHub repository: GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters

The latest dashboard version can be found on the official dashboard page:

[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

[root@k8s-master01 ~]# vim admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
[root@k8s-master01 ~]# kubectl apply -f admin.yaml -n kube-system

3. Log in to the Dashboard

Add the following launch parameters to the Google Chrome shortcut to work around the certificate error that otherwise blocks access to the Dashboard:

--test-type --ignore-certificate-errors

 

4. Change the dashboard Service to NodePort

[root@k8s-master01 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
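kubectl edit opens an interactive editor; change spec.type from ClusterIP to NodePort and save. A non-interactive equivalent (an assumed alternative to the edit above):

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'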

[root@k8s-master01 dashboard]# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.198.150   <none>        8000/TCP        5m46s
kubernetes-dashboard        NodePort    10.96.211.255   <none>        443:31244/TCP   5m47s

Using the port from your own instance, the Dashboard can be reached via the IP of any host running kube-proxy plus that port:

Access the Dashboard at https://192.168.126.140:31244 and choose token as the login method.

5. Retrieve the token

[root@k8s-master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-svbt5
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: b94dc4f0-e240-4eb1-9a8c-660437f5ba68

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlVOd3htbXdYd0NmaHdIUEVkdU5jSURDVGJOSE55MW13WEVFWjJJeHhvR00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3Vud
C9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXN2YnQ1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiOTRkYzRmMC1lMjQwLTRlYjEtOWE4Yy02NjA0MzdmNWJhNjgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.CzrD4AasXENVZ8CpUcwlFqREybF6GRhEIdR9efQMoE1WkVEXDrGHTpH63IrnhAazaQLmvq1XtABbA7Upr5-9ApEzZnK9hRwXlZyPaz3QyqIZLD3a2_K_uOPm9a68OLzwSXobnF_LveFxGgv7uPsOTEVTKgBQYpo03jOSIt9znhXivSpNcofSIs12_MMC5kFHJYT8GlwDI4fnXdRxvUxnDD__XfLvNSBSxB_d2N498tqPL-JcLmVLYOhmiSc9WPN2zwwwpGXkoTc0dT1m2tOg4i5OTSZEMVZJm07rK7y6Cs5SvKmICaAjgypL5A3KIvygZCayt_g96cWvAaNgUCLoPg
