k8s cluster planning
  • master - at least 2 CPU cores, otherwise cluster initialization fails
  Hostname   IP Address      Role                  OS         Hardware
  master     10.62.158.200   Control-plane node    CentOS 7   2 Core / 4 GB Memory
  node01     10.62.158.201   Worker node 01        CentOS 7   2 Core / 4 GB Memory
  node02     10.62.158.202   Worker node 02        CentOS 7   2 Core / 4 GB Memory
Cluster environment setup

Set the hostname of each node according to the cluster plan

# Control-plane node
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# exit
logout

# Worker node 01
[root@localhost ~]# hostnamectl set-hostname node01
[root@localhost ~]# exit
logout

# Worker node 02
[root@localhost ~]# hostnamectl set-hostname node02
[root@localhost ~]# exit
logout
Note: all of the preparation steps below must be executed on every node

Enable synchronized command input to all three nodes

  • Note: with synchronized input, make sure a command has finished on every host before moving on to the next step

Configure local name resolution between the nodes; during initialization the cluster must be able to resolve each node's hostname
[root@master ~]# vim /etc/hosts
10.62.158.200 master
10.62.158.201 node01
10.62.158.202 node02
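A quick sanity check that each hostname resolves (optional; the loop below is a sketch that assumes the three hostnames defined above):
[root@master ~]# for h in master node01 node02; do ping -c 1 $h; done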
Enable bridge netfilter

A bridge is a virtual network device in Linux that acts as a virtual switch; it provides network connectivity for the containers in the cluster, letting them communicate with other containers and with external networks

  • net.bridge.bridge-nf-call-ip6tables = 1 - pass IPv6 packets on the bridge through ip6tables
  • net.bridge.bridge-nf-call-iptables = 1 - pass IPv4 packets on the bridge through iptables
  • net.ipv4.ip_forward = 1 - enable IPv4 forwarding so the cluster's containers can communicate with external networks
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF

Because bridge filtering is enabled, the br_netfilter module must be loaded so that packets crossing bridge devices can be processed by the iptables firewall

  • modprobe - loads a kernel module
  • br_netfilter - the module that allows packets on bridge devices to be processed by the iptables firewall
[root@master ~]# modprobe br_netfilter && lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

Load the kernel parameters from k8s.conf so that the settings above take effect

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
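Note that modprobe does not persist across reboots. A minimal sketch (an extra step, not part of the original procedure) that lets systemd-modules-load load br_netfilter automatically at boot:

[root@master ~]# cat > /etc/modules-load.d/k8s.conf <<EOF
> br_netfilter
> EOF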
Configure the IPVS proxy mode

In k8s a Service can be proxied in two modes, iptables or ipvs. Compared with iptables, ipvs offers more flexible load-balancing algorithms and built-in health checking. To use ipvs mode, the ipvs kernel modules must be loaded manually (a sketch of actually switching kube-proxy to IPVS after the cluster is up follows at the end of this subsection).
ipset and ipvsadm are the two userspace packages used here for network management and load balancing; IPVS supports multiple load-balancing algorithms such as Round Robin, Weighted Round Robin, Least Connection, and Weighted Least Connection.

[root@master ~]# yum install ipset ipvsadm -y

Write the ipvs-related modules that need to be loaded into a file

  • ip_vs - the core load-balancing module
  • ip_vs_rr - round-robin scheduling (the default)
  • ip_vs_wrr - weighted round-robin scheduling, forwarding requests according to the backend servers' weights
  • ip_vs_sh - source-hashing scheduling, so requests from the same client always go to the same backend server, preserving session consistency
  • nf_conntrack - connection tracking, used to track the state of a connection such as the TCP handshake, data transfer, and connection teardown
[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack
> EOF

Run the file to load the modules

# Add execute permission
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules

# Verify the permission has been set
[root@master ~]# ll /etc/sysconfig/modules/ipvs.modules
-rwxr-xr-x 1 root root 119 Apr 12 12:38 /etc/sysconfig/modules/ipvs.modules

# Run the script
[root@master ~]# /etc/sysconfig/modules/ipvs.modules

# Verify the ipvs modules are loaded
[root@master ~]# lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
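Loading the modules by itself does not switch kube-proxy to IPVS. A hedged sketch of how this could be done later, once the cluster is initialized (edit the kube-proxy ConfigMap and restart its DaemonSet; this is an additional step, not part of the original procedure):

# Change mode: "" to mode: "ipvs" in the ConfigMap
[root@master ~]# kubectl -n kube-system edit configmap kube-proxy

# Restart kube-proxy so the new mode takes effect
[root@master ~]# kubectl -n kube-system rollout restart daemonset kube-proxy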
Disable the SWAP partition

To keep kubelet working properly, SWAP must be disabled; otherwise cluster initialization fails

  • Disable temporarily
[root@master ~]# swapoff -a
  • Disable permanently - use this method so the change persists
[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Check SWAP
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.8G        110M        3.3G         11M        454M        3.5G
Swap:            0B          0B          0B
Install Docker - online installation

Install yum-utils, which provides the yum-config-manager command

[root@master ~]# yum install yum-utils -y

Add the Aliyun docker-ce repository

[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

Install the docker package

[root@master ~]# yum install docker-ce-20.10.9-3.el7 -y

Configure Docker to use the systemd cgroup driver; cgroups (control groups) limit process resource usage such as CPU and memory

[root@master ~]# mkdir /etc/docker
[root@master ~]# cat > /etc/docker/daemon.json <<EOF
> {
>         "exec-opts": ["native.cgroupdriver=systemd"]
> }
> EOF

Start docker and enable it to start on boot

[root@master ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Check whether docker was installed successfully

[root@master ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
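To confirm the systemd cgroup driver from daemon.json took effect, an optional check (it should print systemd):

[root@master ~]# docker info --format '{{.CgroupDriver}}'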
Cluster deployment methods

There are several ways to deploy a k8s cluster; the commonly used ones are:

  • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster;
  • Binary packages: download each component's binary from the official site and install them one by one; deployment is cumbersome;
  • Other tools: open-source installers such as sealos

To deploy the k8s cluster with kubeadm, a k8s package repository must be configured so the required cluster software can be installed; the Aliyun YUM mirror is used here

[root@master ~]# cat > /etc/yum.repos.d/k8s.repo <<EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
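Before installing, it can help to confirm the repository is reachable and that the pinned 1.23.0 build exists (an optional check, not in the original steps):

[root@master ~]# yum list kubeadm kubelet kubectl --showduplicates | grep 1.23.0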

Install the cluster software; this lab uses k8s version 1.23.0

  • kubeadm: initializes the cluster, configures the required components, and generates the corresponding certificates and tokens;
  • kubelet: communicates with the control plane, creates, updates, and deletes Pods according to scheduling decisions, and maintains container state on the node;
  • kubectl: the command-line tool for managing the k8s cluster;
[root@master ~]# yum install -y kubeadm-1.23.0-0 kubelet-1.23.0-0 kubectl-1.23.0-0

Configure kubelet to use the systemd cgroup driver (matching Docker); cgroups limit process resource usage such as CPU and memory

[root@master ~]# cat > /etc/sysconfig/kubelet <<EOF
> KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
> EOF

Enable kubelet to start on boot; it will start automatically after the cluster is initialized

[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Cluster initialization - performed on the master node only

Turn off the multi-host synchronized input; the following steps run on the master only

Check the list of images required by the cluster

[root@master ~]# kubeadm config images list

The following component images are required for cluster initialization

W0412 13:05:08.946167   19834 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": dial tcp 146.75.113.55:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
W0412 13:05:08.946214   19834 version.go:104] falling back to the local client version: v1.23.0
k8s.gcr.io/kube-apiserver:v1.23.0
k8s.gcr.io/kube-controller-manager:v1.23.0
k8s.gcr.io/kube-scheduler:v1.23.0
k8s.gcr.io/kube-proxy:v1.23.0
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Generate the cluster initialization configuration file

[root@master ~]# kubeadm config print init-defaults > kubeadm-config.yml
[root@master ~]# ls
anaconda-ks.cfg  kubeadm-config.yml  sysconfigure.sh

The following fields in the configuration file need to be modified

[root@master ~]# vim kubeadm-config.yml
  • Fields to modify:
# Local IP address of this node
advertiseAddress: 10.62.158.200

# Name of this node
name: master

# Image repository for the cluster images; change it to the Aliyun mirror
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
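If you prefer to apply the three edits non-interactively instead of with vim, the sed sketch below assumes the field names and defaults produced by kubeadm config print init-defaults in v1.23 (advertiseAddress: 1.2.3.4, name: node, imageRepository: k8s.gcr.io); verify the resulting file before using it. The component images can then optionally be pre-pulled from the Aliyun mirror:

[root@master ~]# sed -i \
>     -e 's/advertiseAddress: .*/advertiseAddress: 10.62.158.200/' \
>     -e 's/^  name: node$/  name: master/' \
>     -e 's|imageRepository: .*|imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers|' \
>     kubeadm-config.yml

# Optional: pre-pull the component images using the modified configuration
[root@master ~]# kubeadm config images pull --config kubeadm-config.yml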

Long-lived certificates

  • Install the Go toolchain
# Download the Go tarball
[root@master ~]# wget https://studygolang.com/dl/golang/go1.17.6.linux-amd64.tar.gz
[root@master ~]# ls
anaconda-ks.cfg  go1.17.6.linux-amd64.tar.gz  kubeadm-config.yml  sysconfigure.sh
# Extract the archive
[root@master ~]# tar -xvf go1.17.6.linux-amd64.tar.gz -C /usr/local/
# Configure the environment variable
[root@master ~]# echo "export PATH=$PATH:/usr/local/go/bin" >>/etc/profile
[root@master ~]# source /etc/profile
# Verify the Go installation
[root@master ~]# go version
go version go1.17.6 linux/amd64
  • Download the Kubernetes source code
# Download the matching source version
[root@master ~]# wget https://github.com/kubernetes/kubernetes/archive/v1.23.0.tar.gz
[root@master ~]# ls
anaconda-ks.cfg  go1.17.6.linux-amd64.tar.gz  kubeadm-config.yml  kubernetes-1.23.0.tar.gz  sysconfigure.sh
# Extract the source
[root@master ~]# tar -zxvf kubernetes-1.23.0.tar.gz
[root@master ~]# ls
anaconda-ks.cfg  go1.17.6.linux-amd64.tar.gz  kubeadm-config.yml  kubernetes-1.23.0  kubernetes-1.23.0.tar.gz  sysconfigure.sh
  • Modify the certificate validity period
[root@master ~]# cd kubernetes-1.23.0
[root@master kubernetes-1.23.0]# vim ./cmd/kubeadm/app/constants/constants.go
CertificateValidity = time.Hour * 24 * 365 * 100
[root@master kubernetes-1.23.0]# vim staging/src/k8s.io/client-go/util/cert/cert.go
NotAfter:              now.Add(duration365d * 100).UTC(),
  • Compile kubeadm from source
[root@master kubernetes-1.23.0]# make WHAT=cmd/kubeadm GOFLAGS=-v

Note: if the build fails with ./hack/run-in-gopath.sh: line 34: _output/bin/prerelease-lifecycle-gen: Permission denied, install the missing build dependencies and add execute permission to the generated helper binaries, then compile again

[root@master kubernetes-1.23.0]# yum install rsync jq -y
[root@master kubernetes-1.23.0]# chmod +x _output/bin/prerelease-lifecycle-gen
[root@master kubernetes-1.23.0]# chmod +x _output/bin/deepcopy-gen
  • Inspect the compiled kubeadm binary
[root@master kubernetes-1.23.0]# ls -l _output/bin/
total 79012
-rwxr-xr-x 1 root root  6275072 May  9 18:19 conversion-gen
-rwxr-xr-x 1 root root  5996544 May  9 18:19 deepcopy-gen
-rwxr-xr-x 1 root root  6000640 May  9 18:19 defaulter-gen
-rwxr-xr-x 1 root root  3376695 May  9 18:19 go2make
-rwxr-xr-x 1 root root 45170688 May  9 18:43 kubeadm
-rwxr-xr-x 1 root root  8114176 May  9 18:19 openapi-gen
-rwxr-xr-x 1 root root  5971968 May  9 18:19 prerelease-lifecycle-gen
  • Back up the original kubeadm binary
[root@master kubernetes-1.23.0]# cp /usr/bin/kubeadm /usr/bin/kubeadm_bak20240510
  • Overwrite the original kubeadm with the newly compiled binary (a quick check follows below)
# With multiple control-plane nodes, every other control-plane node must use this kubeadm binary as well
[root@master kubernetes-1.23.0]# cp /root/kubernetes-1.23.0/_output/bin/kubeadm /usr/bin/
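A quick check that the replaced binary runs, as mentioned above (the exact version string reported depends on the local build):

[root@master kubernetes-1.23.0]# kubeadm version -o short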

Initialize the cluster

  • --upload-certs - during initialization the control-plane certificates are uploaded to the kubeadm-certs Secret in the cluster, so additional control-plane nodes can retrieve them without the certificate files being copied around manually
[root@master ~]# kubeadm init --config kubeadm-config.yml --upload-certs
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.62.158.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:5404ffd10bf8990fe682758c185aaee6565a4040cfac741c1efa0b85dc334655

Following the hints printed after initialization, run the commands below to set up the cluster administrator kubeconfig

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
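The join command printed by kubeadm init embeds a bootstrap token that expires after 24 hours by default. If a node needs to be added later, a fresh join command can be generated on the master (a standard kubeadm subcommand, not shown in the original output):

[root@master ~]# kubeadm token create --print-join-command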

Following the printed join command, add the worker nodes to the cluster, then verify on the master node

  • Join node01 to the cluster
[root@node01 ~]# kubeadm join 10.62.158.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:5404ffd10bf8990fe682758c185aaee6565a4040cfac741c1efa0b85dc334655
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  • Join node02 to the cluster
[root@node02 ~]# kubeadm join 10.62.158.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:5404ffd10bf8990fe682758c185aaee6565a4040cfac741c1efa0b85dc334655 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Check the status of the three cluster nodes
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   4m17s   v1.23.0
node01   NotReady   <none>                 90s     v1.23.0
node02   NotReady   <none>                 45s     v1.23.0
  • Verify that the certificates are now valid for 100 years
[root@master ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 16, 2124 01:28 UTC   99y                                     no      
apiserver                  Apr 16, 2124 01:28 UTC   99y             ca                      no      
apiserver-etcd-client      Apr 16, 2124 01:28 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Apr 16, 2124 01:28 UTC   99y             ca                      no      
controller-manager.conf    Apr 16, 2124 01:28 UTC   99y                                     no      
etcd-healthcheck-client    Apr 16, 2124 01:28 UTC   99y             etcd-ca                 no      
etcd-peer                  Apr 16, 2124 01:28 UTC   99y             etcd-ca                 no      
etcd-server                Apr 16, 2124 01:28 UTC   99y             etcd-ca                 no      
front-proxy-client         Apr 16, 2124 01:28 UTC   99y             front-proxy-ca          no      
scheduler.conf             Apr 16, 2124 01:28 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 16, 2124 01:28 UTC   99y             no      
etcd-ca                 Apr 16, 2124 01:28 UTC   99y             no      
front-proxy-ca          Apr 16, 2124 01:28 UTC   99y             no 
Deploy the Calico network

Calico and Flannel are two popular k8s network plugins; both provide networking for the Pods in a cluster. However, they differ in implementation and features:

Network model:

  • Calico uses BGP (Border Gateway Protocol) as its underlying network model. It assigns each Pod a unique IP address and routes traffic inside the cluster. Calico supports network policies, allowing fine-grained control over which traffic is allowed or denied.
  • Flannel uses a simpler overlay network model. It assigns each node an IP subnet and builds an overlay network between those subnets: Flannel encapsulates Pod packets inside larger network packets and forwards them between nodes. Flannel favors simplicity and ease of use, and does not offer network policy features comparable to Calico's.

Performance:

  • Because Calico routes with BGP, its performance is usually better than Flannel's. Calico allows direct Pod-to-Pod communication without extra encapsulation and decapsulation between nodes, which gives it an edge in large or highly dynamic clusters.
  • Flannel's overlay model adds encapsulation and decapsulation overhead, which affects network performance. For smaller clusters or scenarios with modest performance requirements, this is usually not a serious problem.

Download the Calico manifest (yaml) on the master node

[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
[root@master ~]# ls
anaconda-ks.cfg  calico.yaml  kubeadm-config.yml  sysconfigure.sh

Create the Calico network

[root@master ~]# kubectl apply -f calico.yaml
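Instead of polling manually, the rollout can also be watched until it finishes; the resource names below are those defined in the v3.24.1 manifest:

[root@master ~]# kubectl -n kube-system rollout status daemonset/calico-node
[root@master ~]# kubectl -n kube-system rollout status deployment/calico-kube-controllers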

Check the k8s namespaces

[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   10m
kube-node-lease   Active   10m
kube-public       Active   10m
kube-system       Active   10m

Check the status of the Calico Pods; once every component is Running, the k8s installation is complete

[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-66966888c4-tdmzk   1/1     Running    0          118s
calico-node-8k8bt                          0/1     Init:0/3   0          118s
calico-node-kk9pd                          0/1     Init:2/3   0          118s
calico-node-x6k26                          1/1     Running    0          118s
coredns-65c54cc984-4gmfs                   1/1     Running    0          11m
coredns-65c54cc984-c9k7s                   1/1     Running    0          11m
etcd-master                                1/1     Running    0          11m
kube-apiserver-master                      1/1     Running    0          11m
kube-controller-manager-master             1/1     Running    0          11m
kube-proxy-8m6gs                           1/1     Running    0          11m
kube-proxy-9j9bg                           1/1     Running    0          8m7s
kube-proxy-smkwz                           1/1     Running    0          8m52s
kube-scheduler-master                      1/1     Running    0          11m

[root@master ~]# kubectl get pod -n kube-system | grep calico
calico-kube-controllers-66966888c4-tdmzk   1/1     Running    0          3m31s
calico-node-8k8bt                          0/1     Init:2/3   0          3m31s
calico-node-kk9pd                          1/1     Running    0          3m31s
calico-node-x6k26                          1/1     Running    0          3m31s

# Calico installation finished
[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-66966888c4-tdmzk   1/1     Running   0          4m31s
calico-node-8k8bt                          1/1     Running   0          4m31s
calico-node-kk9pd                          1/1     Running   0          4m31s
calico-node-x6k26                          1/1     Running   0          4m31s
coredns-65c54cc984-4gmfs                   1/1     Running   0          14m
coredns-65c54cc984-c9k7s                   1/1     Running   0          14m
etcd-master                                1/1     Running   0          14m
kube-apiserver-master                      1/1     Running   0          14m
kube-controller-manager-master             1/1     Running   0          14m
kube-proxy-8m6gs                           1/1     Running   0          14m
kube-proxy-9j9bg                           1/1     Running   0          10m
kube-proxy-smkwz                           1/1     Running   0          11m
kube-scheduler-master                      1/1     Running   0          14m

# Cluster setup finished
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   14m   v1.23.0
node01   Ready    <none>                 12m   v1.23.0
node02   Ready    <none>                 11m   v1.23.0

With the cluster deployed, create an nginx manifest and deploy an nginx application

  • Create the nginx manifest
[root@master ~]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata: 
  name: nginx
  labels: 
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.20.2
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  • Apply the manifest to create the nginx application
[root@master ~]# kubectl apply -f nginx.yml
pod/nginx created
service/nginx-svc created
  • After the deployment succeeds, check the Pod status
[root@master ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          29s
  • List the Services and their ports
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        21m
nginx-svc    NodePort    10.101.21.12   <none>        80:30000/TCP   38s
  • Open the nginx service via any cluster node's IP, and we're done!
http://10.62.158.200:30000/
http://10.62.158.201:30000/
http://10.62.158.202:30000/
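A quick command-line check from any machine that can reach the nodes (the request should return HTTP 200 from nginx):

[root@master ~]# curl -sI http://10.62.158.200:30000/ | head -n 1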