System configuration
  • master - needs at least 2 CPU cores, otherwise cluster initialization fails
Hostname   IP Address       Role                   OS         Hardware
ansible    10.62.158.200    Ansible control node   CentOS 7   2 Core / 4G Memory
master     10.62.158.201    Management node        CentOS 7   2 Core / 4G Memory
node01     10.62.158.202    Worker node 01         CentOS 7   2 Core / 4G Memory
node02     10.62.158.203    Worker node 02         CentOS 7   2 Core / 4G Memory
Set the hostname of the current host
# Taking the master node 10.62.158.201 as an example
[root@localhost ~]# hostnamectl set-hostname master
# Log out and log back in for the new hostname to take effect
[root@localhost ~]# exit
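
The remaining hosts are named the same way; for reference, a sketch matching the table above (run each command on the corresponding host):
# On 10.62.158.200
[root@localhost ~]# hostnamectl set-hostname ansible
# On 10.62.158.202
[root@localhost ~]# hostnamectl set-hostname node01
# On 10.62.158.203
[root@localhost ~]# hostnamectl set-hostname node02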
Install Ansible on the 200 node (10.62.158.200) and configure the managed hosts
  • Upload the offline Ansible package - ansible.tar.gz
  • Install Ansible offline
# Extract the Ansible package into the current directory
[root@localhost ~]# tar -xf ansible.tar.gz
# Enter the extracted ansible directory
[root@localhost ~]# cd ansible
# Install Ansible
[root@localhost ~]# yum install ./*.rpm -y
# Add the Ansible managed hosts; delete the existing file content first (vim shortcut: dG). The hosts file content is shown below
[root@localhost ~]# vim /etc/ansible/hosts
# File content
[k8s]
10.62.158.201
10.62.158.202
10.62.158.203
# List the hosts in the k8s group; k8s is the group name, which is used later to manage these hosts in batch
[root@localhost ~]# ansible k8s --list-hosts
  hosts (3):
    10.62.158.201
    10.62.158.202
    10.62.158.203
Set up passwordless SSH login so that Ansible can operate on the managed hosts without entering a password
# Generate the SSH key pair; just press Enter at every prompt
[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Eg3Yp6BU6sCWYhM0N5Kv17UZooiIzDH1LJS+Ju3oNG4 root@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|.=.+oo.          |
|..B=+ .o.        |
|oB*.o..o.        |
|+=o+ +.+         |
|* B = + S        |
|+* * . +         |
| o*              |
|oE..             |
|oo               |
+----[SHA256]-----+

# Copy the public key to each managed host; type yes and enter the host's login password when prompted
[root@localhost ~]# for ip in 10.62.158.{201..203}
> do
> ssh-copy-id $ip
> done

# Test that the connection works
[root@localhost ~]# ssh 10.62.158.201
Last login: Wed Apr  3 16:13:57 2024 from 10.62.158.200
[root@master ~]# exit
logout
Connection to 10.62.158.201 closed.
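
As an optional quick check (a sketch, not part of the original steps), Ansible's ping module confirms that every host in the group is reachable without a password:
# Each host should report "pong" if key-based login works
[root@localhost ~]# ansible k8s -m ping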

The environment preparation below must be performed on all nodes; watch the shell prompt to see which host you are currently operating on

Configure local name resolution for the cluster; the nodes must be able to resolve each other's hostnames during initialization
[root@ansible ~]# vim /etc/hosts
10.62.158.201 master
10.62.158.202 node01
10.62.158.203 node02
# Copy the file to the other machines in the k8s group
[root@ansible ~]# ansible k8s -m copy -a 'src=/etc/hosts dest=/etc'
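# Optional spot-check (a sketch): confirm the entries landed on every node
[root@ansible ~]# ansible k8s -m shell -a 'tail -n 3 /etc/hosts'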
Enable bridge packet filtering

A bridge is a virtual network device in Linux. It acts as a virtual switch that provides network connectivity for the containers in the cluster, so containers can communicate with other containers or with external networks through the bridge.

  • net.bridge.bridge-nf-call-ip6tables = 1 - pass IPv6 packets traversing the bridge through ip6tables
  • net.bridge.bridge-nf-call-iptables = 1 - pass IPv4 packets traversing the bridge through iptables
  • net.ipv4.ip_forward = 1 - enable IPv4 forwarding so containers in the cluster can communicate with external networks
[root@ansible ~]# vim k8s.conf
# File content
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Copy the file to the other machines in the k8s group
[root@ansible ~]# ansible k8s -m copy -a 'src=k8s.conf dest=/etc/sysctl.d/'

Because bridge filtering is enabled, the br_netfilter module must be loaded so that packets forwarded by the bridge are processed by the iptables firewall

  • Use Ansible to run the following command on all three hosts: modprobe br_netfilter && lsmod | grep br_netfilter
  • modprobe - loads a kernel module
  • br_netfilter - allows packets on bridge devices to be processed by the iptables firewall
[root@ansible ~]# ansible k8s -m shell -a 'modprobe br_netfilter && lsmod | grep br_netfilter'
10.62.158.203 | CHANGED | rc=0 >>
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
10.62.158.202 | CHANGED | rc=0 >>
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
10.62.158.201 | CHANGED | rc=0 >>
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
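
Note that modprobe does not persist across reboots. If you want br_netfilter loaded automatically at boot, one optional approach (a sketch using systemd's modules-load.d mechanism, not part of the original steps) is:
[root@ansible ~]# ansible k8s -m shell -a 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'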

Load the kernel parameters from k8s.conf so that the settings above take effect

  • Use Ansible to run the following command on all three hosts: sysctl -p /etc/sysctl.d/k8s.conf
[root@ansible ~]# ansible k8s -m shell -a 'sysctl -p /etc/sysctl.d/k8s.conf'
10.62.158.202 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
10.62.158.203 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
10.62.158.201 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Configure IPVS

In Kubernetes, a Service has two proxy modes: one based on iptables and one based on IPVS. Compared with iptables, IPVS offers more flexible load-balancing algorithms and built-in health checking. To use the IPVS proxy mode, the IPVS kernel modules must be loaded manually.
ipset and ipvsadm are two packages related to network management and load balancing. In the Kubernetes proxy mode they provide multiple load-balancing algorithms, such as Round Robin, Least Connection, and Weighted Least Connection.

[root@ansible ~]# ansible k8s -m shell -a 'yum install ipset ipvsadm -y'

Write the IPVS-related modules that need to be loaded into a file

  • ip_vs_rr - round-robin algorithm
  • ip_vs_wrr - weighted round-robin algorithm
  • ip_vs_sh - source-hashing algorithm
  • nf_conntrack - connection tracking
  • ansible k8s -m copy -a - Ansible module for copying files
  • ansible k8s -m shell -a - Ansible module for running shell commands
[root@ansible ~]# vim ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Copy the file to the managed hosts
[root@ansible ~]# ansible k8s -m copy -a 'src=ipvs.modules dest=/etc/sysconfig/modules'

# Make ipvs.modules executable
[root@ansible ~]# ansible k8s -m shell -a 'chmod +x /etc/sysconfig/modules/ipvs.modules'
[WARNING]: Consider using the file module with mode rather than running 'chmod'.  If you need to use command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
10.62.158.203 | CHANGED | rc=0 >>

10.62.158.201 | CHANGED | rc=0 >>

10.62.158.202 | CHANGED | rc=0 >>

# Run the script
[root@ansible ~]# ansible k8s -m shell -a '/etc/sysconfig/modules/ipvs.modules'
10.62.158.203 | CHANGED | rc=0 >>

10.62.158.202 | CHANGED | rc=0 >>

10.62.158.201 | CHANGED | rc=0 >>

# On a managed host, check that the IPVS modules are loaded
[root@master ~]# lsmod |grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
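
The same check can also be run from the ansible host against all three nodes at once (an optional sketch):
[root@ansible ~]# ansible k8s -m shell -a 'lsmod | grep -E "ip_vs|nf_conntrack"'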
Disable the swap partition

To keep the kubelet working properly, Kubernetes requires swap to be disabled; otherwise cluster initialization fails

# Disable temporarily
[root@ansible ~]# ansible k8s -m shell -a 'swapoff -a'
10.62.158.202 | CHANGED | rc=0 >>

10.62.158.201 | CHANGED | rc=0 >>

10.62.158.203 | CHANGED | rc=0 >>

# On a managed host, check that swap is off
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.8G        119M        3.3G         11M        478M        3.5G
Swap:            0B          0B          0B

# Disable permanently by commenting out the swap entry in /etc/fstab
[root@ansible ~]# ansible k8s -m shell -a "sed -ri 's/.*swap.*/#&/' /etc/fstab"
[WARNING]: Consider using the replace, lineinfile or template module rather than running 'sed'.  If you need to use command because replace, lineinfile or template is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
10.62.158.202 | CHANGED | rc=0 >>

10.62.158.203 | CHANGED | rc=0 >>

10.62.158.201 | CHANGED | rc=0 >>
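
An optional sketch to confirm the result on every node at once: swap usage should show 0B and the fstab entry should now be commented out.
[root@ansible ~]# ansible k8s -m shell -a 'free -h | tail -1; grep swap /etc/fstab'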
Containerd environment preparation

Add the containerd yum repository

[root@ansible ~]# vim containerd.repo
# File content
[containerd]
name=containerd
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

# Copy the repo file to the managed hosts
[root@ansible ~]# ansible k8s -m copy -a 'src=containerd.repo dest=/etc/yum.repos.d'

# On a managed host, check the repository list; the containerd repository is now present
[root@master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
containerd                                                                                                                                                                                | 3.5 kB  00:00:00     
(1/2): containerd/x86_64/updateinfo                                                                                                                                                       |   55 B  00:00:00     
(2/2): containerd/x86_64/primary_db                                                                                                                                                       | 140 kB  00:00:01     
repo id                                                                                repo name                                                                                                           status
base/7/x86_64                                                                          CentOS-7 - Base - mirrors.aliyun.com                                                                                10,072
containerd/x86_64                                                                      containerd                                                                                                             308
epel/x86_64                                                                            Extra Packages for Enterprise Linux 7 - x86_64                                                                      13,798
extras/7/x86_64                                                                        CentOS-7 - Extras - mirrors.aliyun.com                                                                                 526
updates/7/x86_64                                                                       CentOS-7 - Updates - mirrors.aliyun.com                                                                              5,802
repolist: 30,506

Install containerd

[root@ansible ~]# ansible k8s -m shell -a 'yum install containerd.io-1.6.20-3.1.el7.x86_64 -y'

Generate the containerd configuration file

[root@ansible ~]# ansible k8s -m shell -a 'containerd config default | tee /etc/containerd/config.toml'

Create a script that replaces parts of the config.toml configuration file

  • ansible k8s -m script -a - Ansible module that runs a local script on the managed hosts
[root@ansible ~]# vim cgroup.sh
# Use the systemd cgroup driver; cgroups limit process resource usage such as CPU and memory
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Replace the download address of the pause image
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml

# Run cgroup.sh; it takes effect directly on the hosts in the k8s group
[root@ansible ~]# ansible k8s -m script -a 'cgroup.sh'

# On a managed host, open config.toml and check whether SystemdCgroup and sandbox_image have been replaced
[root@master yum.repos.d]# vim /etc/containerd/config.toml
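
Instead of paging through the whole file, a quick grep works too (an optional sketch); the two settings should now read SystemdCgroup = true and sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6".
[root@master ~]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml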

In a Kubernetes environment, the kubelet communicates with containerd through the containerd.sock file to manage containers; specify the containerd socket address for the CRI client:

[root@ansible ~]# vim crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
[root@ansible ~]# ansible k8s -m copy -a 'src=crictl.yaml dest=/etc'

Start containerd

[root@ansible ~]# ansible k8s -m shell -a 'systemctl enable containerd --now'
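
An optional sketch to confirm the service is running on all nodes:
[root@ansible ~]# ansible k8s -m shell -a 'systemctl is-active containerd'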
Deploy the Kubernetes cluster with kubeadm

There are several ways to deploy a Kubernetes cluster; the commonly used options are:

  • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
  • Binary packages: download each component's binary package from the official site and install them one by one; cumbersome to deploy
  • Other methods: build the cluster with open-source tools such as sealos

To deploy the cluster with kubeadm, a Kubernetes yum repository is needed to install the required software; this deployment uses the Alibaba Cloud mirror.
[root@ansible ~]# vim kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@ansible ~]# ansible k8s -m copy -a 'src=kubernetes.repo dest=/etc/yum.repos.d'
# On a managed host, check that the repo file exists
[root@master yum.repos.d]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS-Base.repo  containerd.repo  epel.repo  kubernetes.repo
# List the Kubernetes versions available in the repository
[root@master yum.repos.d]# yum list --showduplicates kubeadm

Install the cluster software. This deployment uses Kubernetes 1.28.2-0; if no version is specified, the latest version is installed.

  • kubeadm - initializes the cluster, configures the required components, and generates the corresponding certificates and tokens; the cluster communicates internally over HTTPS
  • kubelet - communicates with the master node, creates, updates, and deletes Pods according to the master's scheduling decisions, and maintains the container state on all nodes
  • kubectl - a command-line tool for managing the Kubernetes cluster
[root@ansible ~]# ansible k8s -m shell -a 'yum install kubeadm kubelet kubectl -y'
# After installation, verify on a managed host
[root@master yum.repos.d]# rpm -q kubeadm kubelet kubectl
kubeadm-1.28.2-0.x86_64
kubelet-1.28.2-0.x86_64
kubectl-1.28.2-0.x86_64

# Alternatively, install with explicit versions
[root@ansible ~]# ansible k8s -m shell -a 'yum install kubeadm-1.28.2 kubelet-1.28.2 kubectl-1.28.2 -y'

Configure the kubelet cgroup driver; cgroups limit process resource usage such as CPU and memory

[root@ansible ~]# vim kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
[root@ansible ~]# ansible k8s -m copy -a 'src=kubelet dest=/etc/sysconfig/'
# Enable kubelet to start on boot
[root@ansible ~]# ansible k8s -m shell -a 'systemctl enable kubelet'
Initialize the cluster
# Leave the ansible host (200); the following steps are configured individually and only need to run on the master node
[root@ansible ~]# exit

# List the images required by the cluster
[root@master ~]# kubeadm config images list
I0403 18:58:08.209755   21782 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
W0403 18:58:18.213063   21782 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.28.txt": Get "https://cdn.dl.k8s.io/release/stable-1.28.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0403 18:58:18.213086   21782 version.go:105] falling back to the local client version: v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
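
Optionally (a sketch, not part of the original steps), the images can be pre-pulled from the Aliyun mirror used in the configuration below, so that kubeadm init does not have to download them during initialization:
[root@master ~]# kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.28.2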

# Print a default cluster initialization configuration file
[root@master ~]# kubeadm config print init-defaults > kubeadm-config.yml

# Edit the cluster initialization configuration file:
# advertiseAddress: change to the IP address of the current host
# name: the name of the current node
# imageRepository: use the Alibaba Cloud registry, otherwise the images cannot be downloaded
localAPIEndpoint:
  advertiseAddress: 10.62.158.201
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
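
Since ipset, ipvsadm, and the ip_vs modules were prepared earlier, kube-proxy can optionally be switched to IPVS mode by appending a KubeProxyConfiguration document to the same file; an optional sketch, not part of the original configuration:

# Append a kube-proxy configuration that selects the ipvs proxy mode
[root@master ~]# cat >> kubeadm-config.yml <<EOF
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF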

# Initialize the cluster from the configuration file and wait for it to finish
[root@master ~]# kubeadm init --config kubeadm-config.yml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.62.158.201:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7d9a344fd67d55f151bec369331c41003e18149a60df4583aa04d09acf48bdfd
	
# After successful initialization, run the commands from the output above
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify that the commands worked; the management node is now initialized
[root@master ~]# l.
.  ..  .ansible  .bash_history  .bash_logout  .bash_profile  .bashrc  .cshrc  .kube  .pki  .ssh  .tcshrc  .viminfo
Join the worker nodes to the cluster
# Join the worker nodes using the kubeadm join command printed when the master was initialized
kubeadm join 10.62.158.201:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7d9a344fd67d55f151bec369331c41003e18149a60df4583aa04d09acf48bdfd
[root@node01 ~]# kubeadm join 10.62.158.201:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:7d9a344fd67d55f151bec369331c41003e18149a60df4583aa04d09acf48bdfd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node02 ~]# kubeadm join 10.62.158.201:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:7d9a344fd67d55f151bec369331c41003e18149a60df4583aa04d09acf48bdfd 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
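
If the join command is lost or the token has expired (tokens are valid for 24 hours by default), a new one can be generated on the master; an optional sketch:
[root@master ~]# kubeadm token create --print-join-command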

# Check the cluster state; NotReady means the cluster is not ready yet and needs a Pod network before it can be used
[root@master ~]# kubectl get node
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   8m9s    v1.28.2
node01   NotReady   <none>          2m10s   v1.28.2
node02   NotReady   <none>          80s     v1.28.2
Add the Calico network

Calico and Flannel are two popular Kubernetes network plugins; both provide networking for the cluster's Pods, but they differ in some important ways in implementation and features

  • Installing the Calico network on the master is sufficient
# Download the Calico manifest; this file can be used with Kubernetes 1.18-1.28
[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
--2024-04-03 19:23:47--  https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234906 (229K) [text/plain]
Saving to: 'calico.yaml'

100%[=======================================================================================================================================================================>] 234,906      440B/s   in 8m 54s  

2024-04-03 19:32:49 (440 B/s) - 'calico.yaml' saved [234906/234906]

# Create the Calico network from the manifest
[root@master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

# Check the status of the cluster components and wait for them all to start
[root@master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-9d57d8f49-q5tb5   0/1     ContainerCreating   0          101s
calico-node-c4rl2                         0/1     Running             0          101s
calico-node-ctdth                         1/1     Running             0          101s
calico-node-nnv9l                         0/1     Init:0/3            0          101s
coredns-6554b8b87f-fpscn                  0/1     ContainerCreating   0          30m
coredns-6554b8b87f-z6sbm                  0/1     ContainerCreating   0          30m
etcd-master                               1/1     Running             0          30m
kube-apiserver-master                     1/1     Running             0          30m
kube-controller-manager-master            1/1     Running             0          30m
kube-proxy-87dqz                          1/1     Running             0          30m
kube-proxy-d9q6b                          1/1     Running             0          24m
kube-proxy-wp7wk                          1/1     Running             0          24m
kube-scheduler-master                     1/1     Running             0          30m

# When all Calico pods are Running, the cluster deployment is complete
[root@master ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-9d57d8f49-q5tb5   1/1     Running    0          3m2s
calico-node-c4rl2                         1/1     Running    0          3m2s
calico-node-ctdth                         1/1     Running    0          3m2s
calico-node-nnv9l                         0/1     Init:0/3   0          3m2s
After the cluster is deployed, add an nginx manifest and deploy an nginx application
[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   3d17h   v1.28.2
node01   Ready    <none>          3d17h   v1.28.2
node02   Ready    <none>          3d17h   v1.28.2

# Create the nginx manifest
[root@master ~]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata: 
  name: nginx
  labels: 
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.20.2
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
# Apply the manifest to create the nginx application
[root@master ~]# kubectl apply -f nginx.yml 
pod/nginx created
service/nginx-svc created

# Deployment succeeded; check the Pod status
[root@master ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m4s

# List the Services and their ports
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3d17h
nginx-svc    NodePort    10.109.228.46   <none>        80:30000/TCP   4m33s
Access the nginx service through any cluster node, and we're done!
http://10.62.158.201:30000/
http://10.62.158.202:30000/
http://10.62.158.203:30000/
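
From the shell, a quick check works as well (an optional sketch; any node IP can be used since the Service is of type NodePort):
# An HTTP 200 response with the nginx headers confirms the Service is reachable
[root@master ~]# curl -I http://10.62.158.201:30000/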
