
Deploying a Highly Available Kubernetes Cluster with kubeadm


Notes

  • The newest release is not necessarily the best choice: compared with older releases its core features are stable, but newly added features and interfaces are relatively unstable

  • Once you have learned the HA deployment of one version, other versions work much the same way

  • Do not use servers with Chinese characters in their configuration

  • Avoid cloning the host VMs (clones can cause Pod network problems later during the Calico deployment); use freshly installed machines

  • Upgrade the hosts to CentOS 7.9 where possible

  • Upgrade the kernel to a stable release such as 4.19+

  • When picking a Kubernetes version, prefer a patch release of .5 or later, e.g. 1.xx.5 (these are generally the stable ones)

  • Upgrade the Kubernetes version about once a year

  • The master nodes in the test environment are 2 CPUs / 3 GB

  • The worker nodes in the test environment are 2 CPUs / 2 GB

Kubernetes deployment: binary vs. kubeadm

Binary deployment

  • Harder to deploy, easier to manage, and the cluster scales well
  • More stable: once a cluster reaches a certain size (hundreds of nodes, tens of thousands of Pods), a binary deployment is more stable than one built with kubeadm
  • After a failure, once the host is back up, the control-plane processes come back up with it

kubeadm deployment

  • Simple to deploy, harder to manage
  • The components and services run as containers (containers managing containers), so recovery from failure is slower than with a binary deployment
  • After a failure the host must boot, then the processes start, and only then the containers; the cluster recovers more slowly than a binary deployment

I. Environment Configuration

| OS Version | Hostname    | IP Address     | Spec          |
|------------|-------------|----------------|---------------|
| CentOS 7.9 | k8s-master1 | 192.168.178.51 | 2 CPUs / 3 GB |
| CentOS 7.9 | k8s-master2 | 192.168.178.52 | 2 CPUs / 3 GB |
| CentOS 7.9 | k8s-master3 | 192.168.178.53 | 2 CPUs / 3 GB |
| CentOS 7.9 | k8s-node1   | 192.168.178.54 | 2 CPUs / 2 GB |
| CentOS 7.9 | k8s-node2   | 192.168.178.55 | 2 CPUs / 2 GB |

| Version   | Application |
|-----------|-------------|
| v1.20.12  | Kubernetes  |
| v20.10.10 | Docker      |
| v3.15.3   | Calico      |
| v2.0.4    | Dashboard   |

1. Set the hostname on all nodes

# hostnamectl set-hostname k8s-master1
# hostnamectl set-hostname k8s-master2
# hostnamectl set-hostname k8s-master3
# hostnamectl set-hostname k8s-node1
# hostnamectl set-hostname k8s-node2

2. Add hosts entries on all nodes

# vi /etc/hosts
192.168.178.51  k8s-master1
192.168.178.52  k8s-master2
192.168.178.53  k8s-master3
192.168.178.200 k8s-master-lb	# the VIP address
192.168.178.54  k8s-node1
192.168.178.55  k8s-node2

3. Disable conflicting services on all nodes

3.1. Disable the firewall
# systemctl disable --now firewalld
# setenforce 0
# sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config
3.2. Disable dnsmasq
# skip this step if the service is not present
# systemctl disable --now dnsmasq
3.3. Disable NetworkManager
# systemctl disable --now NetworkManager
3.4. Disable the swap partition
# swapoff -a && sysctl -w vm.swappiness=0
# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
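A quick check that swap is really off; the Swap row should now show 0 in every column:

# free -h | grep -i swap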
3.5. Check the MAC address and product_uuid
# ip link
# cat /sys/class/dmi/id/product_uuid
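kubeadm requires the MAC address and product_uuid to be unique on every node, and cloned VMs often violate this. Once the passwordless SSH from step 7 below is in place, you can compare all nodes from master1 in one pass (a small sketch):

# for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2; do ssh $i "hostname; cat /sys/class/dmi/id/product_uuid"; done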

4. Fetch the required software on all nodes

4.1. Fetch the Aliyun base repo
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# yum install -y yum-utils device-mapper-persistent-data lvm2
4.2. Fetch the Docker yum repo
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4.3. Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
4.4. Install essential tools
# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git bash-completion lrzsz -y

5. Synchronize time on all nodes

5.1. Install ntpdate
# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
# yum install ntpdate -y
5.2. Synchronize the time
# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# echo 'Asia/Shanghai' >/etc/timezone
# ntpdate time2.aliyun.com
5.3. Scheduled time synchronization
# systemctl enable --now crond
# crontab -e
Add the following line, then save and exit; it synchronizes the time every 5 minutes
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
# crontab -l
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

6. Configure Linux resource limits on all nodes

# ulimit -SHn 65535
# vim /etc/security/limits.conf
# append the following at the end of the file, then save and exit
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

7. Passwordless SSH from master1

Generate the key pair
# ssh-keygen -t rsa

Push the public key to every node
# for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
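If you would rather run the whole loop without typing the root password for each host, one option is sshpass; it is not installed by the steps above and usually comes from EPEL, so treat this as a sketch (ROOT_PASSWORD is a placeholder):

# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa		# non-interactive key generation
# yum install -y sshpass		# assumption: sshpass is available in your repos
# for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2;do sshpass -p 'ROOT_PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub $i;done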

II. Kernel Configuration

1. Run yum update on all nodes, then reboot

# yum update -y --exclude=kernel*
# reboot

2. Upgrade the kernel on all nodes

2.1. Download the RPM packages on master1
[root@k8s-master1 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

[root@k8s-master1 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
2.2. Copy them from master1 to all other nodes
# for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
2.3. Install the kernel
# cd /root/
# yum localinstall -y kernel-ml*
2.4. Change the default boot kernel
# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

# grubby --default-kernel

# reboot
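After the reboot, confirm that every node actually boots into the new kernel:

# uname -r
4.19.12-1.el7.elrepo.x86_64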

3. Install and configure ipvsadm on all nodes

1. Install the packages
# yum install ipvsadm ipset sysstat conntrack libseccomp -y
2. Load the ipvs kernel modules

🔺Note: configure the ipvs modules on all nodes. On kernel **4.19+**, nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below, use nf_conntrack_ipv4

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# vim /etc/modules-load.d/ipvs.conf 
# add the following, then save and exit
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
3. Apply the configuration
# systemctl enable --now systemd-modules-load.service
4. Enable the kernel parameters Kubernetes requires
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# apply the settings
# sysctl --system
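You can spot-check a value right away; for example, IP forwarding should now be enabled:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1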
5. Reboot to load everything
# reboot

# after the reboot, verify that the modules are loaded
# lsmod | grep --color=auto -e ip_vs -e nf_conntrack

III. Installing and Configuring the Base Components

1. Deploy Docker on all nodes

1.1. Install docker-ce
# yum makecache && yum install docker-ce -y
1.2. Change the CgroupDriver to systemd

🔺**Note:** newer kubelet releases recommend systemd, so switch Docker's CgroupDriver to systemd

# mkdir /etc/docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
1.3. Enable Docker on boot
# systemctl daemon-reload && systemctl enable --now docker

# docker info | grep Cgroup
 Cgroup Driver: systemd
 
# docker -v
Docker version 20.10.10, build b485636

2. Install the Kubernetes components on all nodes

2.1. List the installable Kubernetes versions
# yum list kubeadm.x86_64 --showduplicates | sort -r
2.2. Install kubeadm, kubelet and kubectl
1. Install a specific version (used in this lab)
# yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y

2. Or install the latest version
# yum install kubeadm kubelet kubectl -y
2.3. Configure the default pause image

🔺**Note:** the default pause image comes from the gcr.io registry, which may be unreachable from inside China, so configure kubelet to use Aliyun's pause image instead

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
2.4. Enable kubelet on boot
# systemctl daemon-reload
# systemctl enable --now kubelet
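At this stage kubelet will start, crash and restart in a loop because no cluster configuration exists yet; this is expected and resolves itself after kubeadm init. You can still confirm the unit is enabled:

# systemctl status kubelet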

IV. Installing and Configuring the HA Components

🔺**Note:** if you are not building a highly available cluster, haproxy and keepalived do not need to be installed

On a public cloud, use the cloud's own load balancer instead of haproxy and keepalived, for example Alibaba Cloud's SLB or Tencent Cloud's ELB, because most public clouds do not support keepalived. Note that on Alibaba Cloud the kubectl client must not sit on a master node: the SLB has a loopback problem, meaning the servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this problem, so it is the recommended choice.

1. Deploy Haproxy on the master nodes

1.1. Install haproxy
# yum -y install haproxy
1.2. Write the haproxy configuration

The configuration is identical on all master nodes

# cd /etc/haproxy/
# cp -r haproxy.cfg  haproxy.cfg.bak

# vim haproxy.cfg
global				# global settings
  maxconn  2000		# maximum number of connections
  ulimit-n  16384	# resource limit
  log  127.0.0.1 local0 err		# log err-level messages
  stats timeout 30s				# 30-second stats timeout

defaults			# defaults
  log global		# inherit the global log settings
  mode  http		# use the HTTP protocol
  option  httplog
  timeout connect 5000		# connect timeout, 5 seconds
  timeout client  50000		# client timeout, 50 seconds
  timeout server  50000		# server timeout, 50 seconds
  timeout http-request 15s	# HTTP request timeout, 15 seconds
  timeout http-keep-alive 15s	# HTTP keep-alive timeout, 15 seconds

frontend monitor-in			# web monitoring page
  bind *:33305				# bind address
  mode http					
  option httplog
  monitor-uri /monitor		# reachable at IP:33305/monitor

frontend k8s-master			# frontend
  bind 0.0.0.0:16443		# bind address; IP:16443 reaches the backend servers
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master			# backend
  mode tcp
  option tcplog
  option tcp-check			# TCP health check
  balance roundrobin		# round-robin balancing
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master1	192.168.178.51:6443  check		# the master nodes of the cluster
  server k8s-master2	192.168.178.52:6443  check
  server k8s-master3	192.168.178.53:6443  check
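Before starting the service, haproxy can validate the configuration without applying it:

# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid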

2. Deploy keepalived on the master nodes

2.1. Install keepalived
# yum -y install keepalived
# cd /etc/keepalived/
# cp -r keepalived.conf keepalived.conf.bak
# vim keepalived.conf
2.2. Configure keepalived
a. master1 configuration
! Configuration File for keepalived
global_defs {			# global settings
    router_id k8s-master1	# router identifier, different on each node
    script_user root		# user that runs the check script
    enable_script_security	# script security
}
vrrp_script chk_apiserver {			# health-check script
    script "/etc/keepalived/check_apiserver.sh"		# create this script below and chmod +x it
    interval 5		# run the check every 5 seconds
    weight -5		# on failure, reduce the priority by 5
    fall 2  		# mark the node as failed after 2 consecutive failures
    rise 1			# mark it healthy again after 1 success
}
vrrp_instance VI_1 {	# VRRP instance; must match across the master/backup group
    state MASTER		# this node's role is MASTER
    interface ens33		# interface on this host
    mcast_src_ip 192.168.178.51		# heartbeat source, this host's IP
    virtual_router_id 51			# virtual router ID, identical on master and backups
    priority 101					# priority
    advert_int 2					# advertisement interval, every 2 seconds
    authentication {				# authentication, must match on master and backups
        auth_type PASS				# authentication type
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {				# the VIP
        192.168.178.200
    }
    track_script {					# enable the health-check script named above
       chk_apiserver
    }
}
b. master2 configuration
! Configuration File for keepalived
global_defs {
    router_id k8s-master2     # router identifier
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP		# role is BACKUP
    interface ens33
    mcast_src_ip 192.168.178.52		# heartbeat source
    virtual_router_id 51
    priority 100		# priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.178.200
    }
    track_script {
       chk_apiserver
    }
}
c. master3 configuration
! Configuration File for keepalived
global_defs {
    router_id k8s-master3	# router identifier
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP		# role is BACKUP
    interface ens33
    mcast_src_ip 192.168.178.53		# heartbeat source
    virtual_router_id 51
    priority 100		# priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.178.200
    }
    track_script {
       chk_apiserver
    }
}
2.3. Create the health-check script
# vim /etc/keepalived/check_apiserver.sh

#!/bin/bash
#time:2021-11-04
#user:CloudPeng
#description
# This script checks whether haproxy is running; if the check fails,
# it stops keepalived so that the VIP fails over to another node.
err=0	# failure counter
for k in $(seq 1 3)		# check up to 3 times
do
    check_code=$(pgrep haproxy)		# pgrep prints the PID of a running process, or nothing if it is not running
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then		# if the checks failed, stop keepalived so the VIP moves
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
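You can dry-run the script by hand; the exit status is what keepalived acts on, 0 when haproxy is running and 1 when it is not (haproxy is only started in step 3 below, so expect 1 for now):

# bash /etc/keepalived/check_apiserver.sh; echo $?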

🔺**Note:** there is an alternative script below; pick whichever of the two suits your needs

#!/bin/bash
#user: CloudPeng
#time: 2021-10-17
#description
# This script monitors Haproxy; if Haproxy is down it first tries a restart,
# and if that fails it kills keepalived to trigger a failover.

counter=$(ps -C haproxy --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    systemctl restart haproxy
    sleep 5                  # try restarting haproxy once, wait 5 seconds, then check again
    counter=$(ps -C haproxy --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
        systemctl stop keepalived   # if the restart failed, stop keepalived to trigger the master/backup switch
    fi
fi
2.4. Make the script executable
# chmod +x /etc/keepalived/check_apiserver.sh 
# ls -l
total 12
-rwxr-xr-x 1 root root  356 Nov  4 20:09 check_apiserver.sh
-rw-r--r-- 1 root root  590 Nov  4 20:31 keepalived.conf
-rw-r--r-- 1 root root 3598 Nov  4 19:51 keepalived.conf.b

3. Start the LB and HA services on the master nodes

1. Enable on boot
# systemctl daemon-reload
# systemctl enable --now haproxy
# systemctl enable --now keepalived
2. Verify the VIP
1. View the VIP on master1:
[root@k8s-master1 keepalived]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.178.51/24 brd 192.168.178.255 scope global ens33
    inet 192.168.178.200/32 scope global ens33	#VIP

2. Ping and telnet the VIP from every node:
# ping 192.168.178.200
PING 192.168.178.200 (192.168.178.200) 56(84) bytes of data.
64 bytes from 192.168.178.200: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 192.168.178.200: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 192.168.178.200: icmp_seq=3 ttl=64 time=0.079 ms

# telnet 192.168.178.200 16443
Trying 192.168.178.200...
Connected to 192.168.178.200.
Escape character is '^]'.		# connection succeeded
Connection closed by foreign host.
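To test failover rather than just reachability, stop keepalived on master1 and check which backup picks up the VIP, then start it again; master1's higher priority means the VIP moves back to it:

[root@k8s-master1 keepalived]# systemctl stop keepalived
# on master2 and master3, see which one now holds the VIP
# ip a | grep 192.168.178.200
[root@k8s-master1 keepalived]# systemctl start keepalived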

V. Cluster Initialization

Images to pre-pull on all master nodes

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2

Images needed on all worker nodes

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0

registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2

registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 # only on one node, placed via a selector

1. Write the kubeadm config file on master1

# vim /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.178.51		# master1's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1			# custom node name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.178.200			# the VIP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.178.200:16443		# VIP address and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12			# Pod subnet
  serviceSubnet: 10.96.0.0/12		# Service subnet
scheduler: {}


2. Migrate the YAML file to the current schema
# cd /root
# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
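kubeadm config migrate rewrites the file against the newest API schema this kubeadm release supports; a quick diff shows exactly what changed:

# diff kubeadm-config.yaml new.yaml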

3. Pre-pull the images on all master nodes

3.1. Pull the images on master1
[root@k8s-master1 ~]# kubeadm config images pull --config /root/new.yaml
[root@k8s-master1 ~]# docker image ls
REPOSITORY                             TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.0    10cc881966cf   11 months ago   118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.0    ca9843d3b545   11 months ago   122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.0    b9fa1895dcaa   11 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.0    3138b6e3d471   11 months ago   46.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   14 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   16 months ago   45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   20 months ago   683kB


# copy the config to the other masters so they can pull with it
[root@k8s-master1 ~]# scp -r new.yaml k8s-master2:/root/
[root@k8s-master1 ~]# scp -r new.yaml k8s-master3:/root/
3.2. Pull the images on master2
[root@k8s-master2 ~]# kubeadm config images pull --config /root/new.yaml
[root@k8s-master2 ~]# docker image ls
REPOSITORY                                                                    TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.0    10cc881966cf   11 months ago   118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.0    ca9843d3b545   11 months ago   122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.0    b9fa1895dcaa   11 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.0    3138b6e3d471   11 months ago   46.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   14 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   16 months ago   45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   20 months ago   683kB
3.3. Pull the images on master3
[root@k8s-master3 keepalived]# kubeadm config images pull --config /root/new.yaml
[root@k8s-master3 keepalived]# docker image ls
REPOSITORY                                                                    TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.0    10cc881966cf   11 months ago   118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.0    b9fa1895dcaa   11 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.0    3138b6e3d471   11 months ago   46.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.0    ca9843d3b545   11 months ago   122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   14 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   16 months ago   45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   20 months ago   683kB

4. Enable kubelet on all nodes

# systemctl enable --now kubelet

5. Initialize master1

5.1. Run kubeadm init on master1

🔺**Note:** initializing master1 generates the certificates and config files under /etc/kubernetes; the other master nodes can then simply join master1:

[root@k8s-master1 ~]# kubeadm init --config /root/new.yaml  --upload-certs
.........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
# the command for master nodes to join; save it!
  kubeadm join 192.168.178.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
# the command for worker nodes to join; save it!
kubeadm join 192.168.178.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98
5.1.1. If the initialization fails, reset and retry
# kubeadm reset -f
# ipvsadm --clear 
# rm -rf ~/.kube
# then run the initialization again
5.2. Configure the environment variable on master1
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master1 ~]# source ~/.bashrc 

[root@k8s-master1 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   4m59s   v1.20.12

[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m59s

[root@k8s-master1 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-54d67798b7-7d676               0/1     Pending   0          12m
coredns-54d67798b7-mkrmq               0/1     Pending   0          12m
etcd-k8s-master01                      1/1     Running   0          12m
kube-apiserver-k8s-master01            1/1     Running   0          12m
kube-controller-manager-k8s-master01   1/1     Running   0          12m
kube-proxy-tbl27                       1/1     Running   0          12m
kube-scheduler-k8s-master01            1/1     Running   0          12m
5.3. kubectl command completion
[root@k8s-master1 ~]# source <(kubectl completion bash)
[root@k8s-master1 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
# log out of the current shell and back in for this to take effect
[root@k8s-master1 ~]# exit

6. Join all remaining nodes to the cluster

6.1. Join the master nodes
# use the join command recorded from the successful master1 init above:
kubeadm join 192.168.178.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96
    
.........
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

6.2. Join the worker nodes
# use the join command recorded from the successful master1 init above:
kubeadm join 192.168.178.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

.........
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
6.3. Check the cluster again from master1
[root@k8s-master1 ~]# kubectl get nodes 
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   25m     v1.20.12
k8s-master2    NotReady   control-plane,master   3m51s   v1.20.12
k8s-master3    NotReady   control-plane,master   2m52s   v1.20.12
k8s-node1      NotReady   <none>                 102s    v1.20.12
k8s-node2      NotReady   <none>                 98s     v1.20.12
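All nodes report NotReady at this point because no CNI network plugin is installed yet; the Calico deployment in the next section fixes this. The reason is visible in the node conditions:

[root@k8s-master1 ~]# kubectl describe node k8s-node1 | grep -i ready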

VI. Deploying the Calico Network Plugin from master1

Images needed on all nodes

registry.cn-beijing.aliyuncs.com/dotbalo/cni
registry.cn-beijing.aliyuncs.com/dotbalo/pod2daemon-flexvol
registry.cn-beijing.aliyuncs.com/dotbalo/node
registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers # this image runs on only one node (replicas: 1)

1. Download the source files

**Download:** https://codeload.github.com/dotbalo/k8s-ha-install/zip/refs/heads/manual-installation-v1.20.x

[root@k8s-master1 ~]# mkdir /root/git ; cd /root/git/
# upload the downloaded archive to this directory
[root@k8s-master1 git]# yum -y install lrzsz unzip
[root@k8s-master1 git]# unzip k8s-ha-install-manual-installation-v1.20.x.zip 
[root@k8s-master1 git]# ls
k8s-ha-install-manual-installation-v1.20.x

2. Edit the YAML configuration

2.1. Set etcd_endpoints to the master node IPs
[root@k8s-master1 git]# cd k8s-ha-install-manual-installation-v1.20.x/calico/

[root@k8s-master1 calico]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.178.51:2379,https://192.168.178.52:2379,https://192.168.178.53:2379"#g' calico-etcd.yaml
2.2. Base64-encode the etcd CA and certificates

Explanation:

  • base64-encode ca.crt, strip the newlines, and assign the result to ETCD_CA
  • base64-encode server.crt, strip the newlines, and assign the result to ETCD_CERT
  • base64-encode server.key, strip the newlines, and assign the result to ETCD_KEY
[root@k8s-master1 calico]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master1 calico]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`

[root@k8s-master1 calico]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
2.3. Substitute the values into the file

Explanation:

These replacements are made in calico-etcd.yaml:

  • replace # etcd-key: null with etcd-key: ${ETCD_KEY}

  • replace # etcd-cert: null with etcd-cert: ${ETCD_CERT}

  • replace # etcd-ca: null with etcd-ca: ${ETCD_CA}

  • replace etcd_ca: "" with etcd_ca: "/calico-secrets/etcd-ca"

  • replace etcd_cert: "" with etcd_cert: "/calico-secrets/etcd-cert"

  • replace etcd_key: "" with etcd_key: "/calico-secrets/etcd-key"

[root@k8s-master1 calico]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

[root@k8s-master1 calico]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
2.4. Set Calico's Pod CIDR
[root@k8s-master1 ~]# cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'
172.168.0.0/12

[root@k8s-master1 calico]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

[root@k8s-master1 calico]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml


[root@k8s-master1 calico]# cat calico-etcd.yaml | grep -A1 'CALICO_IPV4POOL_CIDR'
            - name: CALICO_IPV4POOL_CIDR
              value: 172.168.0.0/12
              # the Pod subnet configured during the master1 cluster init

3. Apply the manifest

[root@k8s-master1 calico]# kubectl apply -f calico-etcd.yaml 
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

[root@k8s-master1 calico]# kubectl get pod -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-5tfck   1/1     Running   0          2m6s
calico-node-8zn7h                          1/1     Running   0          2m6s
calico-node-btngk                          1/1     Running   0          2m6s
calico-node-j6847                          1/1     Running   0          2m6s
calico-node-k86mh                          1/1     Running   0          2m6s
calico-node-v78kq                          1/1     Running   0          2m6s
coredns-54d67798b7-7d676                   1/1     Running   0          73m
coredns-54d67798b7-mkrmq                   1/1     Running   0          73m
etcd-k8s-master01                          1/1     Running   2          73m
etcd-k8s-master2                           1/1     Running   0          51m
etcd-k8s-master3                           1/1     Running   0          49m
kube-apiserver-k8s-master01                1/1     Running   2          73m
kube-apiserver-k8s-master2                 1/1     Running   0          51m
kube-apiserver-k8s-master3                 1/1     Running   1          49m
kube-controller-manager-k8s-master01       1/1     Running   3          73m
kube-controller-manager-k8s-master2        1/1     Running   0          51m
kube-controller-manager-k8s-master3        1/1     Running   0          50m
kube-proxy-87g9m                           1/1     Running   0          50m
kube-proxy-fwj5h                           1/1     Running   0          49m
kube-proxy-rl6r5                           1/1     Running   0          51m
kube-proxy-tbl27                           1/1     Running   2          73m
kube-proxy-w7pz2                           1/1     Running   0          49m
kube-scheduler-k8s-master01                1/1     Running   3          73m
kube-scheduler-k8s-master2                 1/1     Running   0          51m
kube-scheduler-k8s-master3                 1/1     Running   0          49m

[root@k8s-master1 calico]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   75m   v1.20.12
k8s-master2    Ready    control-plane,master   54m   v1.20.12
k8s-master3    Ready    control-plane,master   53m   v1.20.12
k8s-node1      Ready    <none>                 51m   v1.20.12
k8s-node2      Ready    <none>                 51m   v1.20.12

4. Testing the Calico network

**Explanation:** verify that the Calico network actually works by creating several Pods and testing Pod-to-Pod and Pod-to-Service communication.

[root@k8s-master1 ~]# vim nginx-dp.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: daocloud.io/library/nginx:latest
        name: nginx

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc

spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: nginx-deploy
    
[root@k8s-master1 ~]# kubectl apply -f nginx-dp.yaml 
deployment.apps/nginx-deploy created
service/nginx-svc created

[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE   
nginx-deploy-76bcbdfc49-vlzzd   1/1     Running   0      52s   172.173.131.7    k8s-node2  
nginx-deploy-76bcbdfc49-whkxh   1/1     Running   0      52s   172.175.156.68   k8s-node1  

[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        13h
nginx-svc    NodePort    10.109.140.116   <none>        80:30080/TCP   68s
1. Access via the Pod IPs:
[root@k8s-master1 ~]# curl -I 172.173.131.7
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:16:16 GMT

[root@k8s-master1 ~]# curl -I 172.175.156.68
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:16:27 GMT

2. Access via the ClusterIP:
[root@k8s-master1 ~]# curl -I 10.109.140.116:80
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:17:12 GMT

3. Access via the NodePort:
[root@k8s-master1 ~]# curl -I 192.168.178.54:30080
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:18:22 GMT
[root@k8s-master1 ~]# curl -I 192.168.178.55:30080
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:19:11 GMT

4. Access from inside a Pod:
[root@k8s-master1 ~]# kubectl exec -it  nginx-deploy-76bcbdfc49-vlzzd -- /bin/sh
# curl -I 172.175.156.68
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:22:03 GMT

# curl -I 10.109.140.116
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Fri, 05 Nov 2021 04:23:42 GMT

VII. Deploying Metrics Server from Master1

You can place this component with node selection; deploying it on a single node is enough

registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:v0.4.1

**Explanation:** Kubernetes gathers system resource metrics through metrics-server, which reports memory, disk, CPU and network usage for nodes and Pods.

1. Steps on Master1

Explanation: copy front-proxy-ca.crt from master1 to all worker nodes

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node1:/etc/kubernetes/pki/front-proxy-ca.crt

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node2:/etc/kubernetes/pki/front-proxy-ca.crt

2. Apply the manifest on master1

[root@k8s-master1 ~]# cd /root/git/k8s-ha-install-manual-installation-v1.20.x/metrics-server-0.4.x-kubeadm/

[root@k8s-master1 metrics-server-0.4.x-kubeadm]# ls
comp.yaml

[root@k8s-master1 metrics-server-0.4.x-kubeadm]# kubectl apply -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

3. Check the result

[root@k8s-master1 ~]# kubectl top node 
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   123m         6%     1000Mi          34%       
k8s-master2    122m         6%     1438Mi          50%       
k8s-master3    125m         6%     1369Mi          47%       
k8s-node1      54m          2%     813Mi           40%       
k8s-node2      64m          3%     876Mi           43%

[root@k8s-master1 ~]# kubectl top pod -n kube-system         

VIII. Deploying the Dashboard from Master1

You can place this component with node selection; deploying it on a single node is enough

registry.cn-beijing.aliyuncs.com/dotbalo/dashboard:v2.0.4
registry.cn-beijing.aliyuncs.com/dotbalo/metrics-scraper:v1.0.4

1. Install a specific version (used in this lab)

[root@k8s-master1 ~]# cd /root/git/k8s-ha-install-manual-installation-v1.20.x/dashboard/

[root@k8s-master1 dashboard]# ls
dashboard-user.yaml  dashboard.yaml

[root@k8s-master1 dashboard]# kubectl apply -f ./
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8s-master1 dashboard]# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.109.202.201   <none>        8000/TCP   5m29s
kubernetes-dashboard        ClusterIP   10.96.194.27     <none>        443/TCP    5m29s

2. Install the latest version (choose one of the two)

🥇**Official GitHub:** https://github.com/kubernetes/dashboard


2.1. Run the command from the official docs
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
2.2. Create an admin user
# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
# kubectl apply -f admin.yaml -n kube-system

3. Log in to the web UI

3.1. Change the dashboard Service to NodePort
[root@k8s-master1 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# find and edit the following
spec:
  clusterIP: 10.96.194.27
  clusterIPs:
  - 10.96.194.27
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort	# change this field
3.2. Check the Service again
[root@k8s-master1 dashboard]# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.109.202.201   <none>        8000/TCP        7m40s
kubernetes-dashboard        NodePort    10.96.194.27     <none>        443:32661/TCP   7m40s
3.3. Access the web UI

To dismiss the self-signed certificate warning (this applies to Google Chrome), start the browser with the following flags:

--test-type --ignore-certificate-errors

Restart the browser and visit the page again; the warning no longer appears.

3.4. Get the login token
[root@k8s-master1 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-7pm6f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: f52de3a4-31b0-4ffa-8851-7dba77a8b029

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlA1a3A5M2pydjdFZGlycTFjaFhfRUdXLVBfNnNzNGQ2UGdNdjN4NW9RTkkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTdwbTZmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmNTJkZTNhNC0zMWIwLTRmZmEtODg1MS03ZGJhNzdhOGIwMjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.SwF42kujMIzyHtGWSitDXASeSqgZ5PL2PJjdq-cIe-2hYAh9SsYsXVFRWuka6jG66OeNG71IZ7huKe8_sfv5cQ_v_j0ilMiP0aTo3YQhdU-PKKWIsfGwI7BfiRtk9JzWQlDYJCPM8Nn_lFZDLCMoeJ_FeQG8zBxoAkgL1nvAaJICPQbs_A-PwwMTmuUurttiu9CF6HOoiWlCxCMWLEGXIwGl43Wc4evP3qpuBYcsrymeC91LSd4Szn-uqb5_aS6JKKYtTQb2_nQ8LjECCvMyH6tPbqW_pPSi8FNfqhSUtMV3ls93Ilmm8X6ZJNjCkBxcxN89ED1RKcrJBrGgL2aE_A


3.5. Log in to the web UI with the token

IX. Enabling IPVS in kube-proxy

🔺**Note:** later experiments showed that for Pods to communicate with Services by name, that is, to reach a Service via ServiceName.Namespace.svc.cluster.local, kube-proxy's mode had to be changed to IPVS.

1. Change the mode on Master1

[root@k8s-master1 ~]# kubectl edit cm kube-proxy -n kube-system
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"	#添加此处 ipvs 即可,保存退出

2. Roll out the kube-proxy update

[root@k8s-master1 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

3. Verify on all nodes

[root@k8s-master1 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
[root@k8s-master2 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
[root@k8s-master3 keepalived]# curl 127.0.0.1:10249/proxyMode
ipvs
[root@k8s-node1 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
[root@k8s-node2 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
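The IPVS rules themselves can be inspected with ipvsadm; in IPVS mode every Service is represented by a virtual server entry (the exact list depends on the Services in your cluster):

# ipvsadm -Ln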

X. Learn to Use the Official Docs -> the kubectl Cheat Sheet

🥇**Official site:** https://kubernetes.io/

🥇**Official docs:** https://kubernetes.io/zh/docs/reference/kubectl/

XI. Draining the Worker Nodes to Save Resources in the Test Environment

🗡**Note:** drain the worker nodes to save resources. After draining, Pods must be deployed on the master nodes, which carry a taint; see "2. Master nodes cannot run non-system Pods" under "XII. Troubleshooting" below.

1. Drain k8s-node2

[root@k8s-master1 ~]# kubectl drain k8s-node2 --delete-local-data --ignore-daemonsets --force 
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-8zn7h, kube-system/kube-proxy-6t2b8
evicting pod kubernetes-dashboard/dashboard-metrics-scraper-7645f69d8c-78x8r
evicting pod default/nginx-deploy-7d5f68dc69-smjtv
evicting pod kube-system/coredns-54d67798b7-mkrmq
evicting pod kube-system/coredns-54d67798b7-7d676
evicting pod kube-system/metrics-server-545b8b99c6-d2ldd
pod/dashboard-metrics-scraper-7645f69d8c-78x8r evicted
pod/metrics-server-545b8b99c6-d2ldd evicted
pod/nginx-deploy-7d5f68dc69-smjtv evicted
pod/coredns-54d67798b7-mkrmq evicted
pod/coredns-54d67798b7-7d676 evicted
node/k8s-node2 evicted

[root@k8s-master1 ~]# kubectl get nodes
NAME           STATUS                     ROLES                  AGE   VERSION
k8s-master01   Ready                      control-plane,master   37h   v1.20.12
k8s-master2    Ready                      control-plane,master   37h   v1.20.12
k8s-master3    Ready                      control-plane,master   37h   v1.20.12
k8s-node1      Ready                      <none>                 37h   v1.20.12
k8s-node2      Ready,SchedulingDisabled   <none>                 37h   v1.20.12

[root@k8s-master1 ~]# kubectl delete nodes k8s-node2 
node "k8s-node2" deleted

[root@k8s-master1 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   37h   v1.20.12
k8s-master2    Ready    control-plane,master   37h   v1.20.12
k8s-master3    Ready    control-plane,master   37h   v1.20.12
k8s-node1      Ready    <none>                 37h   v1.20.12


2. Drain k8s-node1

[root@k8s-master1 ~]# kubectl drain k8s-node1 --delete-local-data --ignore-daemonsets --force 
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-k86mh, kube-system/kube-proxy-rswpm
evicting pod kubernetes-dashboard/kubernetes-dashboard-78cb679857-gvfx2
evicting pod default/hello-f9jr6
evicting pod default/nginx-deploy-7d5f68dc69-v6s8p
evicting pod kube-system/calico-kube-controllers-5f6d4b864b-5tfck
evicting pod kube-system/metrics-server-545b8b99c6-jnc9r
evicting pod default/nginx-deploy-7d5f68dc69-xtrdr
pod/hello-f9jr6 evicted
pod/nginx-deploy-7d5f68dc69-xtrdr evicted
pod/metrics-server-545b8b99c6-jnc9r evicted
pod/kubernetes-dashboard-78cb679857-gvfx2 evicted
pod/nginx-deploy-7d5f68dc69-v6s8p evicted
pod/calico-kube-controllers-5f6d4b864b-5tfck evicted
node/k8s-node1 evicted

[root@k8s-master1 ~]# kubectl delete nodes k8s-node1 
node "k8s-node1" deleted

[root@k8s-master1 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   37h   v1.20.12
k8s-master2    Ready    control-plane,master   37h   v1.20.12
k8s-master3    Ready    control-plane,master   37h   v1.20.12

3. Test and verify

**Explanation:** write a simple YAML file to test which hosts the Pods land on.

[root@k8s-master1 ~]# vim nginx-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: daocloud.io/library/nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc

spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 80
  selector:
    app: nginx-deploy

[root@k8s-master1 ~]# kubectl get pod -o wide 
NAME                          READY   STATUS    RESTARTS   AGE   IP         NODE         
nginx-deploy-7d5f68dc69-hjnhq   1/1     Running   0   28s   172.171.95.195    k8s-master3 
nginx-deploy-7d5f68dc69-nvj4b   1/1     Running   0   22s   172.170.159.132   k8s-master01 

XII. Troubleshooting

1. The cluster join Token has expired

🔺**Note:** the Token generated at cluster initialization is valid for **24 hours** and then expires. Generate a new Token to join the cluster again; note that a freshly uploaded certificate key is only valid for 2 hours.

1.1. Generate a join Token for worker nodes
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.178.200:16443 --token menw99.1hbsurvl5fiz119n     --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98
1.2. Generate the --certificate-key for master nodes to join
[root@k8s-master1 ~]# kubeadm init phase upload-certs  --upload-certs
I1105 12:33:08.201601   93226 version.go:254] remote version is much newer: v1.22.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

# the command for a master node to join the cluster
kubeadm join 192.168.178.200:16443 --token menw99.1hbsurvl5fiz119n     --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
 --control-plane --certificate-key 38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

2. Master nodes cannot run non-system Pods

**Explanation:** the master nodes carry a taint that blocks non-system Pods from being scheduled there. In a test environment the taint can be removed to free up usable resources.

2.1. View the taints
[root@k8s-master1 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
2.2. Remove the taints
[root@k8s-master1 ~]# kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted

[root@k8s-master1 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
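If you later want the default behavior back, the same label selector can re-apply the taint (a sketch):

[root@k8s-master1 ~]# kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master=:NoSchedule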

3. Changing the default NodePort range

How it works: by default Kubernetes allocates NodePorts from the 30000-32767 range for external access. You can widen the default range by editing the apiserver configuration.

# the error
The Service "nginx-svc" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767


[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/12
- --service-node-port-range=1-65535    # add this line

# no manual restart needed; the change takes effect automatically
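Because kube-apiserver runs as a static Pod, the kubelet re-creates it as soon as the manifest changes; you can watch the Pods come back and then re-apply the Service that failed (a sketch, your pod names will differ):

[root@k8s-master1 ~]# kubectl get pod -n kube-system -l component=kube-apiserver
[root@k8s-master1 ~]# kubectl apply -f nginx-dp.yaml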