Preface

In the previous post we prepared the machines for K8S; today we start installing the K8S environment itself.

Runtime Installation (Containerd)

K8s version: 1.26
We install the Containerd runtime directly.

Installing Containerd by way of Docker (all nodes)

I originally planned to install Containerd by hand, but since Docker containers may be needed later, I install Docker here and get Containerd along with it.

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Next, configure the kernel modules that Containerd needs:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Then load the modules:

modprobe -- overlay

modprobe -- br_netfilter

Configure the kernel parameters Containerd requires:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the kernel parameters
sysctl --system
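To confirm the settings took effect, you can read the values back; a quick sanity check (note that lsmod may print nothing if the modules were built into the kernel rather than loaded):

```shell
# On a configured node this should print 1 (reading /proc/sys works even
# where the sysctl binary is not installed)
cat /proc/sys/net/ipv4/ip_forward
# The two modules should appear here unless they are built into the kernel
lsmod | grep -e overlay -e br_netfilter || echo "modules not listed"
```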

Generate the Containerd configuration file on all nodes:

# Create the containerd configuration directory
mkdir -p /etc/containerd
# Write the default configuration, overwriting any existing file
containerd config default | tee /etc/containerd/config.toml

On all nodes, switch Containerd's cgroup driver to systemd:

# Open the file and make the two changes described below
vim /etc/containerd/config.toml

Find the containerd.runtimes.runc.options section and add SystemdCgroup = true.
On all nodes, also change sandbox_image to a Pause image that matches your setup, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
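Both edits can also be applied non-interactively with sed. A minimal sketch, shown against a scratch copy of the file; on a real node point CONF at /etc/containerd/config.toml (the two sample lines mirror the shape of the default config):

```shell
# Scratch copy for illustration; use /etc/containerd/config.toml on a real node
CONF=${CONF:-/tmp/config.toml}
# Two sample lines in the shape the default config emits them
cat > "$CONF" <<'EOF'
            SystemdCgroup = false
    sandbox_image = "registry.k8s.io/pause:3.6"
EOF
# Switch the runc cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONF"
# Point sandbox_image at the mirror image
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' "$CONF"
# Read both lines back to confirm
grep -E 'SystemdCgroup|sandbox_image' "$CONF"
```

Restart containerd afterwards for the changes to take effect.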

Start Containerd on all nodes and enable it at boot:

systemctl daemon-reload
systemctl enable --now containerd

On all nodes, point the crictl client at the containerd runtime socket:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Verify the containerd environment:

# Show the containerd version
ctr version
# List images
ctr images ls
# List containers
ctr containers ls

Installing the Kubernetes Components

Install version 1.26 of kubeadm, kubelet, and kubectl on all nodes:

yum install kubeadm-1.26* kubelet-1.26* kubectl-1.26* -y

If Containerd is your runtime, kubelet must be configured to use it. (Note that --container-runtime=remote is deprecated in 1.26 and was removed in 1.27; the --container-runtime-endpoint flag is the part that matters.)

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Enable kubelet at boot on all nodes. kubelet will restart in a loop until the cluster is initialized with kubeadm; that is expected at this stage:

systemctl daemon-reload
systemctl enable --now kubelet

Installing the High-Availability Components

With the Kubernetes packages and tools installed, we move on to the high-availability components. On a public cloud, use the provider's own load balancer (e.g. Alibaba Cloud's SLB or Tencent Cloud's CLB) in place of haproxy and keepalived, because most public clouds do not support keepalived.
Run the following on all Master nodes:

yum install keepalived haproxy -y
# Create the config directory
mkdir /etc/haproxy
# Edit the config file
vim /etc/haproxy/haproxy.cfg
# Paste in the following content
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01  <master01-ip>:6443  check
  server k8s-master02  <master02-ip>:6443  check
  server k8s-master03  <master03-ip>:6443  check
# Restart haproxy
systemctl restart haproxy
# Check that haproxy is running properly
systemctl status haproxy

Configure Keepalived on all Master nodes:

mkdir /etc/keepalived
# Edit the configuration file (the per-node differences are noted below)
# master01
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16780032  # your NIC name
    mcast_src_ip 192.168.1.9  # this node's IP
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
       chk_apiserver
    }
}

# master02
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16780032
    mcast_src_ip 192.168.1.6
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
       chk_apiserver
    }
}

# master03
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16780032
    mcast_src_ip 192.168.1.4
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
       chk_apiserver
    }
}


Create the Keepalived health-check script on all master nodes:

cat > /etc/keepalived/check_apiserver.sh <<'EOF'
#!/bin/bash

# Check up to 3 times whether haproxy is running; if it never is,
# stop keepalived so the VIP fails over to another master
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
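The retry loop above can be sanity-checked without touching keepalived. A minimal sketch with the process name parameterized (check_proc is a hypothetical helper, not part of the original script):

```shell
# Retry up to 3 times; succeed as soon as the named process is found
check_proc() {
    name=$1
    err=0
    for k in 1 2 3; do
        if [ -z "$(pgrep -x "$name")" ]; then
            err=$((err + 1))
            sleep 1
        else
            err=0
            break
        fi
    done
    [ "$err" -eq 0 ]
}
check_proc haproxy && echo "haproxy is up" || echo "haproxy is down"
```

On a master node, `check_proc haproxy` mirrors exactly what the real script tests before deciding to stop keepalived.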

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
# Start haproxy and keepalived
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# Test the virtual IP (VIP)
ping 192.168.1.100 -c 4
telnet 192.168.1.100 16443

Wrap-up:

Installing the runtime environment — from setup through troubleshooting to verifying every component, and then making sense of it all — was no small job. Next chapter: initializing the K8S cluster.

Reference:

https://edu.51cto.com/course/23845.html
