1. Kubernetes Cluster Node Preparation

1.1 Operating System

OS version: CentOS 7.9

1.2 Cluster Plan

Hostname        IP Address       Memory   Disk   Role
k8s-master      192.168.100.10   4G       50G    master
k8s-worker-01   192.168.100.11   4G       50G    worker01
k8s-worker-02   192.168.100.12   4G       50G    worker02

Software      Version
kubernetes    1.29.x
docker        26.1.4
cri-dockerd   0.3.8
Calico        3.24.1

1.3 Host Configuration

1.3.1 Hostname Configuration

This deployment uses three hosts: one master node named k8s-master, and two worker nodes named k8s-worker-01 and k8s-worker-02.

【Master node】

[root@localhost ~]# hostnamectl set-hostname k8s-master && bash

【Worker01 node】

[root@localhost ~]# hostnamectl set-hostname k8s-worker-01 && bash

【Worker02 node】

[root@localhost ~]# hostnamectl set-hostname k8s-worker-02 && bash

1.3.2 Host IP Address Configuration

【Master node】

[root@k8s-master ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static  #use a static address
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9aec0c9c-663b-4f7f-b4eb-f0574e6b910e
DEVICE=ens33
ONBOOT=yes   #bring the interface up at boot
IPADDR=192.168.100.10   #host IP address
NETMASK=255.255.255.0   #subnet mask
GATEWAY=192.168.100.2   #gateway
DNS1=192.168.100.2   #DNS server

【Worker01 node】

[root@k8s-worker-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9aec0c9c-663b-4f7f-b4eb-f0574e6b910e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.11
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=192.168.100.2

【Worker02 node】

[root@k8s-worker-02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9aec0c9c-663b-4f7f-b4eb-f0574e6b910e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.12
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=192.168.100.2
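One step the original leaves implicit: after editing ifcfg-ens33, the interface must be restarted before the static address takes effect. A minimal sketch for CentOS 7's legacy network service (run on each node):

```shell
# Restart the legacy network service so the new static address takes effect
systemctl restart network

# Confirm ens33 now carries the expected address
ip addr show ens33 | grep 'inet '
```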

1.3.3 Hostname Resolution

【Master, worker01, and worker02 nodes】

[root@k8s-master ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 k8s-master
192.168.100.11 k8s-worker-01
192.168.100.12 k8s-worker-02
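As an optional sanity check (an addition, not part of the original steps), each node can confirm the new hosts entries resolve and the peers are reachable:

```shell
# One ping per hostname; each should answer from the mapped address
for h in k8s-master k8s-worker-01 k8s-worker-02; do
    ping -c 1 "$h" > /dev/null && echo "$h OK"
done
```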

1.3.4 Firewall Configuration

【Master, worker01, and worker02 nodes】

【Disable and stop the firewall】
[root@k8s-master ~]#  systemctl disable firewalld && systemctl stop firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

【Check the firewall state】
[root@k8s-master ~]# firewall-cmd --state
not running

1.3.5 SELinux Configuration

【Master, worker01, and worker02 nodes】

【Disable SELinux】
[root@k8s-master ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config && setenforce 0

【Reboot for the change to take effect】
[root@k8s-master ~]# reboot

【Check the SELinux state】
[root@k8s-master ~]# sestatus
SELinux status:                 disabled

1.3.6 Time Synchronization

【Master, worker01, and worker02 nodes】

【Remove the existing yum repos】
[root@k8s-master ~]# rm -rf /etc/yum.repos.d/*

【Configure the CentOS 7 yum repo】
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

【Build the yum cache】
[root@k8s-master ~]# yum makecache

【Install chrony】
[root@k8s-master ~]# yum -y install chrony

【Edit the chrony configuration file】
[root@k8s-master ~]# sed -i '3,6s/^/#/g' /etc/chrony.conf
[root@k8s-master ~]# sed -i '7s/^/server ntp1.aliyun.com iburst/g' /etc/chrony.conf
[root@k8s-master ~]# echo "allow 192.168.100.0/24" >> /etc/chrony.conf
[root@k8s-master ~]# echo "local stratum 10" >> /etc/chrony.conf

【Restart chronyd and enable it at boot】
[root@k8s-master ~]# systemctl restart chronyd && systemctl enable chronyd
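Optionally, synchronization can be verified with chronyc (assuming chrony was installed as above); a `^*` prefix marks the server chrony is currently locked to:

```shell
# List configured time sources; '^*' marks the selected server
chronyc sources

# Summary of clock offset and overall sync state
chronyc tracking
```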

1.3.7 Kernel IP Forwarding and Bridge Filtering

【Master, worker01, and worker02 nodes】

【Add the bridge-filtering and IP-forwarding configuration file】
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

【Apply the settings】
[root@k8s-master ~]# sysctl --system

【Load the br_netfilter module】
[root@k8s-master ~]# modprobe br_netfilter

【Verify the module is loaded】
[root@k8s-master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
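One caveat the original does not mention: a module loaded with modprobe does not survive a reboot. A common way to make br_netfilter persistent (an added suggestion, not from the original steps) is a modules-load.d entry:

```shell
# Load br_netfilter automatically on every boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
```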

1.3.8 IPVS Forwarding

【Master, worker01, and worker02 nodes】

【Configure how the IPVS modules are loaded】
[root@k8s-master ~]# mkdir -p /etc/sysconfig/ipvsadm
[root@k8s-master ~]# cat > /etc/sysconfig/ipvsadm/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

【Make the script executable, run it, and verify the modules are loaded】
[root@k8s-master ~]# chmod 755 /etc/sysconfig/ipvsadm/ipvs.modules && bash /etc/sysconfig/ipvsadm/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
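Optionally (not part of the original steps), installing the ipvsadm userspace tool makes it easy to inspect the IPVS rule table once kube-proxy is running in ipvs mode:

```shell
# Userspace tool for viewing and managing IPVS rules
yum -y install ipvsadm

# Dump the virtual-server table (empty until kube-proxy populates it)
ipvsadm -Ln
```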

1.3.9 Disable the Swap Partition

【Master, worker01, and worker02 nodes】

【Turn off swap and comment out the swap entry in /etc/fstab】
[root@k8s-master ~]# swapoff -a && sed -i 's/.*swap.*/#&/g' /etc/fstab
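A quick check (added here, not in the original) confirms swap is fully off, since kubelet refuses to start with active swap by default:

```shell
# The Swap line should read 0B across the board after swapoff
free -h | grep -i swap

# No output here means no swap device is still active
swapon --show
```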

2. Installing Docker CE and cri-dockerd

2.1 Install Docker

【Master, worker01, and worker02 nodes】

【Install required system tools】
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 vim net-tools wget

【Configure the docker-ce yum repo】
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

【Install Docker via yum】
[root@k8s-master ~]# yum -y install docker-ce

2.2 Configure the Aliyun Registry Mirror

【Master, worker01, and worker02 nodes】
1. Register an Aliyun account.
2. See the official documentation: https://cr.console.aliyun.com/cn-qingdao/instances/mirrors
3. Open the management console, set a password, and enable the service.
4. Look up your personal accelerator address.

Note that JSON does not allow comments, so the note about the accelerator must stay out of the file itself: replace the first mirror entry below with your own Aliyun accelerator address.

[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
    "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com",
        "https://docker.m.daocloud.io",
        "https://docker.888666222.xyz"
        ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

【Start Docker and enable it at boot】
[root@k8s-master ~]# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
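Because kubelet 1.29 expects the systemd cgroup driver, it is worth confirming (an added check, not in the original) that Docker picked up the native.cgroupdriver=systemd option:

```shell
# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'
```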

2.3 Install cri-dockerd

【Master, worker01, and worker02 nodes】

【Download the cri-dockerd RPM package】
[root@k8s-master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm

【Install cri-dockerd】
[root@k8s-master ~]# yum -y install cri-dockerd-0.3.8-3.el7.x86_64.rpm

【Edit the cri-docker service unit file】
[root@k8s-master ~]# sed -i 's/ExecStart=\/usr\/bin\/cri-dockerd --container-runtime-endpoint fd:\/\//ExecStart=\/usr\/bin\/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com\/google_containers\/pause:3.9 --container-runtime-endpoint fd:\/\//' /usr/lib/systemd/system/cri-docker.service

【Start cri-docker and enable it at boot】
[root@k8s-master ~]# systemctl start cri-docker && systemctl enable cri-docker
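An added sanity check: kubeadm will later talk to cri-dockerd through its UNIX socket, so the socket and both units should now be active:

```shell
# The CRI endpoint kubeadm is pointed at during init/join
ls -l /var/run/cri-dockerd.sock

# Both units should report 'active'
systemctl is-active cri-docker.service cri-docker.socket
```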

3. Kubernetes 1.29 Cluster Deployment

3.1 Install kubelet, kubeadm, and kubectl

【Master, worker01, and worker02 nodes】

【Configure the Kubernetes yum repo】
[root@k8s-master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF

【Install via yum】
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl

【Set the kubelet cgroup driver to match Docker (systemd)】
[root@k8s-master ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

【Enable kubelet at boot】
[root@k8s-master ~]# systemctl enable kubelet

3.2 Prepare the Cluster Images

【Master, worker01, and worker02 nodes】

【List the images the cluster needs】
[root@k8s-master ~]# kubeadm config images list --kubernetes-version=v1.29.0
registry.k8s.io/kube-apiserver:v1.29.0
registry.k8s.io/kube-controller-manager:v1.29.0
registry.k8s.io/kube-scheduler:v1.29.0
registry.k8s.io/kube-proxy:v1.29.0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0

【Pull the images and retag them】
[root@k8s-master ~]# vim images_download.sh
#!/bin/bash
images_list='
registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0
registry.aliyuncs.com/google_containers/coredns:v1.11.1
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.12-0'

for i in $images_list
do
        docker pull $i
done

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0  registry.k8s.io/kube-apiserver:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0  registry.k8s.io/kube-controller-manager:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0  registry.k8s.io/kube-scheduler:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0  registry.k8s.io/kube-proxy:v1.29.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.12-0 registry.k8s.io/etcd:3.5.12-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.11.1  registry.k8s.io/coredns/coredns:v1.11.1
docker tag registry.aliyuncs.com/google_containers/pause:3.9  registry.k8s.io/pause:3.9

【Run the script】
[root@k8s-master ~]# sh images_download.sh
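Optionally (not in the original), confirm before initializing that all seven images now carry their registry.k8s.io tags:

```shell
# Expect seven matching lines, one per retagged image
docker images | grep '^registry.k8s.io' | wc -l
```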

Taking a VM snapshot at this point is recommended.

3.3 Cluster Initialization

【Master node】

【Initialize with kubeadm】
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.29.0 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.100.10 \
--cri-socket unix:///var/run/cri-dockerd.sock \
--image-repository registry.aliyuncs.com/google_containers

3.4 Create the kubectl Config Directory

【Master node】

【Set up the cluster admin kubeconfig】
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

3.5 Join the Worker Nodes to the Cluster

【Worker01 and worker02 nodes】

【Regenerate a join token (master node)】
[root@k8s-master ~]# kubeadm token create --ttl 0  --print-join-command
kubeadm join 192.168.100.10:6443 --token l0p87y.osc78zqb5n9wpmc7 --discovery-token-ca-cert-hash sha256:567667dd42cdef1c1db11d1a323d97eafd580c3343cde86972bcb41ac3b2bf9f

【Join the worker nodes to the cluster】
[root@k8s-worker-01 ~]# kubeadm join 192.168.100.10:6443 \
--token l0p87y.osc78zqb5n9wpmc7 \
--discovery-token-ca-cert-hash sha256:567667dd42cdef1c1db11d1a323d97eafd580c3343cde86972bcb41ac3b2bf9f \
--cri-socket=unix:///var/run/cri-dockerd.sock
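Back on the master, an added check confirms the join succeeded; the workers appear immediately and move to Ready once the network plugin is installed:

```shell
# Run on the master node; all three nodes should be listed
kubectl get nodes -o wide
```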

3.6 Install the Calico Network Plugin

【Master node】

【Download the Calico manifest】
[root@k8s-master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml

【Pull the required images (master, worker01, and worker02 nodes)】
[root@k8s-master ~]# vim images_download.sh
#!/bin/bash
images_list='
docker.888666222.xyz/calico/cni:v3.24.1
docker.888666222.xyz/calico/pod2daemon-flexvol:v3.24.1
docker.888666222.xyz/calico/node:v3.24.1
docker.888666222.xyz/calico/kube-controllers:v3.24.1
docker.888666222.xyz/calico/typha:v3.24.1'

for i in $images_list
do
        docker pull $i
done

docker tag docker.888666222.xyz/calico/cni:v3.24.1 docker.io/calico/cni:v3.24.1
docker tag docker.888666222.xyz/calico/pod2daemon-flexvol:v3.24.1 docker.io/calico/pod2daemon-flexvol:v3.24.1
docker tag docker.888666222.xyz/calico/node:v3.24.1 docker.io/calico/node:v3.24.1
docker tag docker.888666222.xyz/calico/kube-controllers:v3.24.1 docker.io/calico/kube-controllers:v3.24.1
docker tag docker.888666222.xyz/calico/typha:v3.24.1 docker.io/calico/typha:v3.24.1

【Run the script】
[root@k8s-master ~]# sh images_download.sh

【Deploy Calico】
[root@k8s-master ~]# kubectl apply -f /root/calico.yaml

【Check Calico status】
[root@k8s-master ~]# kubectl get -n kube-system pods |grep calico
calico-kube-controllers-9d57d8f49-fpzzc   1/1     Running   0          60s
calico-node-2lcvk                         1/1     Running   0          60s
calico-node-fpsdv                         1/1     Running   0          60s
calico-node-nt5g5                         1/1     Running   0          60s

4. Kubernetes Cluster Verification

4.1 Check Cluster Status

【List all nodes】
[root@k8s-master ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
k8s-master      Ready    control-plane   91m   v1.29.6
k8s-worker-01   Ready    <none>          86m   v1.29.6
k8s-worker-02   Ready    <none>          86m   v1.29.6

【Check cluster component health】
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok

【Check pods in the kube-system namespace】
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d57d8f49-29278   1/1     Running   0          53m
calico-node-ckg4x                         1/1     Running   0          53m
calico-node-kfstd                         1/1     Running   0          53m
calico-node-sqbzh                         1/1     Running   0          53m
coredns-857d9ff4c9-b29hh                  1/1     Running   0          91m
coredns-857d9ff4c9-vdn2n                  1/1     Running   0          91m
etcd-k8s-master                           1/1     Running   0          92m
kube-apiserver-k8s-master                 1/1     Running   0          92m
kube-controller-manager-k8s-master        1/1     Running   0          92m
kube-proxy-5dvrf                          1/1     Running   0          87m
kube-proxy-w9kx8                          1/1     Running   0          87m
kube-proxy-zfjvj                          1/1     Running   0          91m
kube-scheduler-k8s-master                 1/1     Running   0          92m

4.2 Pod Creation Test

【Create a Deployment】
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx --replicas=2
deployment.apps/nginx created

【Expose the Deployment as a NodePort Service】
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

【Verify the Deployment and Service status】
[root@k8s-master ~]# kubectl get deployments,svc
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           97s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        96m
service/nginx        NodePort    10.106.52.54   <none>        80:31423/TCP   31s

【Test access to the nginx service】
[root@k8s-master ~]# curl -L 192.168.100.10:31423
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
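Once the test succeeds, the demo resources can be removed (an optional cleanup step, not in the original):

```shell
# Delete the test Service and Deployment
kubectl delete service nginx
kubectl delete deployment nginx
```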

Taking a VM snapshot at this point is recommended.
