1. Environment

This approach is intended for production environments.
High-availability tool: keepalived
Load balancer: haproxy (nginx or LVS would also work)
Docker, kubeadm, and kubelet versions are shown below (this guide targets Kubernetes v1.23.1):
[Screenshot: docker / kubeadm / kubelet versions]
Virtual machines:
[Screenshot: VM list; the hosts and the VIP are the /etc/hosts entries in Step 1]

2. Main Workflow

System environment configuration -> upgrade the kernel to 4.19+ -> install Docker -> install kubeadm -> install the HA component keepalived -> install and configure haproxy -> kubeadm init -> join additional master or node machines

3. Detailed Steps

All nodes must perform every step below, right up to cluster initialization.
Step 1: Edit the hosts file:

cat /etc/hosts
 
 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.247.100 k8s-master1
192.168.247.101 k8s-master2
192.168.247.103 k8s-node1
192.168.247.90  k8s-master-vip

Step 2: On all nodes, disable the firewall, SELinux, and swap:

systemctl disable --now firewalld
systemctl disable --now NetworkManager

# SELinux: set the following in /etc/sysconfig/selinux
SELINUX=disabled

swapoff -a && sysctl -w vm.swappiness=0
# Comment out the swap entry in /etc/fstab
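
A minimal non-interactive sketch of the SELinux and fstab edits (assuming the default CentOS 7 file locations):

# Put SELinux into permissive mode now and disable it permanently (fully effective after reboot)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux

# Comment out the swap entry in /etc/fstab so it stays off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab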

Step 3: Configure NTP time synchronization on all hosts:

yum install -y ntp

NTP server configuration on master1 (/etc/ntp.conf):

driftfile /var/lib/ntp/drift

restrict default nomodify notrap nopeer noquery  # nomodify: clients may not change the server's time; notrap: no trap service; noquery: clients may not query server status; nopeer: peers may not associate with this server
# Deny everything by default; the restrict lines below control which clients may connect

restrict 127.0.0.1 # allow unrestricted access from localhost
restrict ::1

restrict 192.168.247.0 mask 255.255.255.0 nomodify notrap  # change this to your own VM subnet
# Hosts on the 192.168.247.0/24 network may sync, but may not change the server's time or use the trap service

server 210.72.145.44 prefer  # National Time Service Center (China)
server 0.cn.pool.ntp.org    
server 1.cn.pool.ntp.org

server 127.127.1.0   # fall back to the local clock

restrict 0.cn.pool.ntp.org nomodify notrap noquery
restrict 1.cn.pool.ntp.org nomodify notrap noquery
restrict 210.72.145.44 nomodify notrap noquery

fudge 127.127.1.0 stratum 10    

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

disable monitor

Configuration on master2 and node1 (/etc/ntp.conf):

driftfile /var/lib/ntp/drift

restrict default nomodify notrap nopeer noquery

restrict 127.0.0.1 
restrict ::1
restrict 192.168.247.100 nomodify notrap noquery

server 192.168.247.100

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

disable monitor

Then start ntpd and enable it at boot:

systemctl enable --now ntpd
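
To confirm that master2 and node1 are actually syncing from master1 (an optional check):

# An asterisk (*) in front of 192.168.247.100 means it has been selected as the time source
ntpq -p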

All nodes: raise the resource limits:

vim /etc/security/limits.conf
# Append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
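
The new limits only apply to sessions opened after the change; a quick optional check from a fresh login shell:

# Should report the nofile and nproc values configured above
ulimit -n
ulimit -u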

Configure passwordless SSH from master1 to the other nodes (optional on the other nodes):

ssh-keygen -t rsa
for i in k8s-master1 k8s-master2 k8s-node1;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Step 4: Switch the yum base repo to a domestic mirror, for example the 163 mirror, as sketched below.
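
A minimal sketch of switching to the 163 mirror (the repo file URL below is the one 163 has historically published for CentOS 7; substitute your preferred mirror if it has moved):

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum clean all && yum makecache
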
Step 5: Add the Kubernetes yum repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Step 6: Install the required base packages:

yum install -y wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2

Step 7: After rebooting, upgrade the kernel. The kernel RPM packages can be downloaded from this network drive link:

https://pan.baidu.com/s/14Pg_LllldqTrLZaAlbqP3A
提取码:5i2e

[Screenshot: the two kernel-ml RPM files]
The kernel upgrade must likewise be done on every machine; copy both RPM files to all nodes.

Install the kernel:

cd /root && yum localinstall -y kernel-ml*

Set the new kernel as the default boot entry:

grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Reboot, then check the kernel version:

grubby --default-kernel
uname -r

Step 8: Install ipvsadm and load the IPVS modules

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Load the modules for the current boot:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

Then make them load automatically at every boot:

vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Enable and start the module-load service:

systemctl enable --now systemd-modules-load.service

Enable the kernel parameters Kubernetes needs:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

Reboot once more.
Check that the modules are loaded:

lsmod | grep --color=auto -e ip_vs -e nf_conntrack

Step 9: Install Docker
Add the Docker repo:

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Install directly with yum (the latest version by default):

yum install -y docker-ce docker-ce-cli containerd.io

There is a cgroup driver issue to handle here: kubelet now uses the systemd driver, so Docker's cgroup driver must be changed to systemd as well.
Check Docker's current driver:

docker info |grep Cgroup 

Change it:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Enable Docker at boot and start it:

systemctl daemon-reload && systemctl enable --now docker

Step 10: Install Kubernetes
Installation is straightforward: with the Kubernetes repo configured earlier, a plain yum install is enough:

yum install -y kubeadm

# Installing kubeadm automatically pulls in kubelet, kubectl, and the other required packages
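
Because the repo installs the latest release by default, you may want to pin the packages to the version this guide targets (v1.23.1, per the kubeadm config used later); a hedged example:

yum install -y kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1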

Point kubelet at a domestic registry for the pause image (the file also pins kubelet's cgroup driver to systemd):

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

Enable kubelet to start at boot:

systemctl daemon-reload
systemctl enable --now kubelet

Step 11: Install and configure keepalived

yum install -y keepalived

master1 configuration (/etc/keepalived/keepalived.conf):

! Configuration File for keepalived

global_defs {
   router_id 1  # unique identifier
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   
}

vrrp_instance VI_1 {
    state MASTER  # this node is the primary
    interface ens33  # change this to your NIC name
    virtual_router_id 51
    priority 100   # higher priority wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.90
    
     }
    track_script {
        chk_apiserver

     } 
}

master2 (/etc/keepalived/keepalived.conf):

! Configuration File for keepalived

global_defs {
   router_id 1  
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   
}

vrrp_instance VI_1 {
    state BACKUP   # changed to BACKUP on this node
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.90
    
     }
    track_script {
        chk_apiserver

     }
    
}

Both master1 and master2 need the health-check script chk_apiserver:
/etc/keepalived/check_apiserver.sh

#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Make the script executable:

chmod +x /etc/keepalived/check_apiserver.sh

Enable at boot and start the service:

systemctl enable --now keepalived
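
After keepalived starts, the VIP should land on master1. Note that check_apiserver.sh stops keepalived whenever no haproxy process is found, so this check is most meaningful once haproxy (next step) is installed and running:

# 192.168.247.90 should appear as a secondary address on ens33 of whichever node currently holds MASTER
ip addr show ens33 | grep 192.168.247.90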

Step 12: Deploy haproxy
The configuration file is identical on master1 and master2.

yum install -y haproxy

Configuration file /etc/haproxy/haproxy.cfg:

global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master1	192.168.247.100:6443  check
  server k8s-master2	192.168.247.101:6443  check

Port 16443 on the host is forwarded to port 6443 (the apiserver port) on the backends 192.168.247.100 and 192.168.247.101.
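
The original steps do not show starting haproxy explicitly; enable and start it on both masters, then verify that the frontend is listening (the backend checks will stay down until the apiserver comes up in the next step):

systemctl enable --now haproxy

# The frontend should listen on 16443, and the monitor URI should return HTTP 200
ss -lntp | grep 16443
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:33305/monitor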

Step 13: Cluster initialization
The approach is to initialize one master first, then join the second master with kubeadm join, and add the node the same way; there is no need to run init on every machine. I originally considered running init on every master, and with the load balancer in front that might even work, but every node would then have to be added to each master separately, which is very tedious. Kubernetes has its own mechanism for this: a machine that joins as a control plane becomes a master and already carries the node membership information.
Create the YAML file:
vim new.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.247.100   # this machine's IP
  bindPort: 6443   # local apiserver port
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.247.90   # the VIP managed by keepalived
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.247.90:16443   # the load-balanced front-end address
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # image registry
kind: ClusterConfiguration
kubernetesVersion: v1.23.1   # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12   # pod network CIDR
  serviceSubnet: 10.96.0.0/12   # service network CIDR
scheduler: {}

Copy this file to every server. Although only 192.168.247.100 will use it for initialization, every machine needs to pull the images, and the imageRepository defined in this file is where they are pulled from.

If this config file format is out of date for your kubeadm version, migrate it:

kubeadm config migrate --old-config new.yaml --new-config new2.yaml

Next, use new.yaml on all three machines to pull the required images; the other servers do not need to change anything in the file:

kubeadm config images pull --config /root/new.yaml

If this fails with an error (the one that suggests rerunning with --v=5), the registry is unreachable; there are two other mirror addresses worth trying:

daocloud.io/daocloud
registry.cn-hangzhou.aliyuncs.com/google_containers

# Simply substitute one of these for registry.aliyuncs.com/google_containers
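
If you do switch, a one-line sketch that rewrites the imageRepository in new.yaml (using one of the alternatives listed above):

sed -i 's#registry.aliyuncs.com/google_containers#registry.cn-hangzhou.aliyuncs.com/google_containers#' /root/new.yaml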

Once the images are pulled, only master1 needs the next operation.
Run the init:

kubeadm init --config /root/new.yaml  --upload-certs

If it fails, there are two likely causes: a mistake in the YAML file (especially the IP addresses), or a cgroup driver mismatch between Docker and kubelet.

On success, just run the commands it prints: the output includes the join command for additional masters as well as the one for nodes. Be sure to save those join commands.
[Screenshot: kubeadm init output with the join commands]
On master1, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

master1's initialization is now complete. Next, start kubelet and enable it at boot on all three machines:

systemctl enable --now kubelet

Step 14: Join master2 and node1 to the cluster
On master2, run the control-plane join command printed by master1's init:

kubeadm join 192.168.247.90:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:7e2075c5cccf6b2dc6a00b05f5b1520929307f3e1be29d5a97ca2784789c4363 \
	--control-plane --certificate-key 0adf0a50a83b02423a084983e7add740ea6420a6f72edebad35921940d2f5ffe
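
The certificate key uploaded by --upload-certs is only valid for two hours and the bootstrap token for 24 hours; if they have expired by the time you join master2, regenerate them on master1 (standard kubeadm commands, not taken from the original init output):

# Prints a fresh "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." command
kubeadm token create --print-join-command
# Re-uploads the control-plane certificates and prints a new --certificate-key
kubeadm init phase upload-certs --upload-certs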

Then, since it is also a master node, it needs to run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

node1:

kubeadm join 192.168.247.90:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:7e2075c5cccf6b2dc6a00b05f5b1520929307f3e1be29d5a97ca2784789c4363 

Check from master1:

kubectl get node

[Screenshot: kubectl get node output]
As the output shows, things are not finished yet: the cluster DNS cannot come up because we have not yet installed the calico network.

Step 15: Install the calico network
Only master1 needs to do this. Download the manifest:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

In the manifest, uncomment the CALICO_IPV4POOL_CIDR setting and change it to our cluster's pod CIDR, i.e. the podSubnet from new.yaml (172.168.0.0/12).
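
A hedged one-liner for the same edit (it assumes the manifest still ships the commented-out CALICO_IPV4POOL_CIDR variable with its default 192.168.0.0/16 value):

sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|; s|#   value: "192.168.0.0/16"|  value: "172.168.0.0/12"|' calico.yaml
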
Apply calico.yaml on master1:

kubectl apply -f calico.yaml
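
You can watch the calico and coredns pods come up before re-checking the nodes:

kubectl get pods -n kube-system -o wide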

Check again:
[Screenshot: kubectl get node after calico is installed]
The cluster setup is now complete.
