Deploying a Kubernetes Cluster from Zero to One (Based on Docker)

This document is aimed at Kubernetes beginners who want to build a cluster for hands-on learning.

Kubernetes defines three function-specific interfaces and completes the corresponding work by calling whatever implementation is plugged into each of them.
- Container Runtime Interface (CRI)
Kubernetes does not ship its own container runtime; it only reserves this interface, and any runtime solution that conforms to the CRI standard can be used.

- Container Network Interface (CNI)
Kubernetes does not ship its own networking solution; it only reserves this interface, and any networking solution that conforms to the CNI standard can be used.

- Container Storage Interface (CSI)
Kubernetes does not ship its own storage solution; it only reserves this interface, and any storage solution that conforms to the CSI standard can be used.
This interface is optional.

The Kubernetes cluster in this example is deployed on the following environment:
1. Operating system: Ubuntu 22.04
2. At least 2 GB of memory per host is recommended
3. Kubernetes: v1.28.8
4. cri-dockerd: v0.3.11
Network environment:
Node network: 10.0.0.0/24
Pod network: 10.244.0.0/16
Service network: 10.96.0.0/12

1. Deployment Architecture Example

(Deployment architecture diagram)

IP           Hostname       Role
10.0.0.130   k8s-master01   master
10.0.0.131   k8s-node01     worker
10.0.0.132   k8s-node02     worker
10.0.0.133   k8s-node03     worker

2. Deployment Workflow

kubeadm creates a minimal Kubernetes cluster that follows best practices, which makes it well suited for learning and experimenting with Kubernetes on your own; a cluster configured with kubeadm can pass the Kubernetes Conformance tests.

  • Prepare the initial environment on every node host
  • Install the container runtime on all master and worker nodes (Docker plus cri-dockerd in this guide)
  • Install kubeadm, kubelet and kubectl on all master and worker nodes
  • Install and configure cri-dockerd on all nodes
  • Run kubeadm init on the master node and verify the master node's status
  • Install and configure the network plugin on the master node
  • Join all worker nodes to the cluster with kubeadm join
  • Create the first Pod in the cluster, then test access and network communication
2.1 Deployment Steps

Deployment steps:

1. Deploy the selected container runtime on every node:
   docker + cri-dockerd
   (or containerd / containerd.io)

2. Deploy kubelet, kubeadm and kubectl on every node.

3. Bootstrap the first control-plane node:
   kubeadm init
   Configuration can be passed to the init command in two ways:
   (1) command-line options;
   (2) a configuration file (see the sketch after this list).
   Additional step: deploy the selected network plugin.

4. Join each worker node to the cluster created by the first control-plane node:
   kubeadm join

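For option (2) above, a minimal sketch of what such a configuration file could look like, assuming the same version, control-plane endpoint, image repository and network CIDRs used later in this guide (the file name kubeadm-config.yaml is arbitrary):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.8
controlPlaneEndpoint: kubeapi.xu.com
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
# Initialize with the file instead of individual command-line flags
kubeadm init --config kubeadm-config.yaml --upload-certs
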
3. Initialize All Hosts

3.1 Set Up SSH Key Authentication Between All Hosts

Set up SSH key-based authentication so that files can easily be synchronized between nodes in later steps.

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.0.0.131
......
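To avoid repeating ssh-copy-id by hand for each node, the key can be pushed to every node in one loop; a small sketch assuming the node IPs from the table in section 1:

for i in {130..133};do ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.0.0.$i;done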
3.2 Set Hostnames and Name Resolution
hostnamectl set-hostname k8s-master01

cat > /etc/hosts <<EOF
10.0.0.130 k8s-master01.xu.com kubeapi.xu.com  k8s-master01
10.0.0.131 k8s-node01.xu.com  k8s-node01
10.0.0.132 k8s-node02.xu.com  k8s-node02
10.0.0.133 k8s-node03.xu.com  k8s-node03
EOF

for i in {130..133};do scp /etc/hosts 10.0.0.$i:/etc/hosts;done
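The worker nodes need their own hostnames as well. With the SSH keys from section 3.1 in place, this can be done from the master in one loop; the hostnames below assume the naming in the table from section 1:

for i in 1 2 3;do ssh 10.0.0.$((130+i)) "hostnamectl set-hostname k8s-node0$i";done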
3.3 Disable Swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
# or
systemctl disable --now swap.img.swap
systemctl mask swap.target

# Disable swap
# `swapoff -a` turns swap off immediately. To disable it permanently, edit /etc/fstab and
# comment out or delete all swap-related lines (the sed command above does exactly that).

# Enable IP forwarding
# Kubernetes requires the node to forward IPv4 traffic. Enable it and make the setting persistent
# (this is also covered by the sysctl configuration in section 3.6):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
# This applies the change immediately, and the entry in /etc/sysctl.conf keeps it in effect after a reboot.
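A quick way to confirm that swap is really off and forwarding is on: swapon should print nothing, the Swap line in free should show 0, and the sysctl should return 1.

swapon --show
free -h
sysctl net.ipv4.ip_forward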
3.4 Time Synchronization
# Use the chronyd service (package name: chrony) to keep time precisely synchronized across all nodes
apt -y install chrony
chronyc sources -v
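Optionally, point chrony at an NTP server that is reachable from your network; ntp.aliyun.com below is only an example, substitute your own time source if needed:

echo 'server ntp.aliyun.com iburst' >> /etc/chrony/chrony.conf
systemctl restart chrony
chronyc sources -v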
3.5 Disable the Firewall
# Disable the default iptables-based firewall service (ufw)
ufw disable
ufw status
3.6 Kernel Parameter Tuning

If Docker is installed, it configures the following kernel parameters automatically and no manual
steps are needed; if containerd is installed instead, they must be configured by hand.
To allow iptables to inspect bridged traffic, the br_netfilter module must be loaded (to load it
explicitly, run modprobe br_netfilter), and net.bridge.bridge-nf-call-iptables must be set to 1 so
that iptables on the Linux nodes can see bridged traffic correctly.

# Load the modules
modprobe overlay
modprobe br_netfilter
# Load the modules automatically at boot
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply the sysctl parameters without rebooting
sysctl --system
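The result can be verified: the modules should appear in lsmod and all three parameters should report 1.

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward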

4. Install Docker on All Hosts and Adjust Its Configuration

Configure the cgroup driver. Both the container runtime and the kubelet have a property called the "cgroup driver", which matters for managing cgroups on Linux machines.
Warning: the container runtime and the kubelet must use the same cgroup driver, otherwise the kubelet will fail.

Example:

# Ubuntu can install Docker from its built-in repository (the docker.io package)
apt update
apt -y install docker.io
# Since Kubernetes v1.22, if the kubelet's cgroup driver is not set explicitly, it defaults to
# systemd. Configure the registry mirrors and the cgroup driver on all hosts:
root@k8s-master01:~# cat /etc/docker/daemon.json
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"exec-opts": ["native.cgroupdriver=systemd"] 
}
root@k8s-master01:~# 
root@k8s-master01:~# for i in {130..133};do scp /etc/docker/daemon.json 10.0.0.$i:/etc/docker/daemon.json;done

# Verify that the change took effect
root@k8s-master01:~# systemctl restart docker.service
root@k8s-master01:~# docker info |grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 2
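Since daemon.json was copied to every node, Docker also has to be restarted on each of them; a small loop for that, assuming the SSH keys from section 3.1 (if the Go-template field name differs on your Docker version, a plain `docker info | grep Cgroup` works just as well):

for i in {131..133};do ssh 10.0.0.$i "systemctl restart docker && docker info --format '{{.CgroupDriver}}'";done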

5. Install kubeadm, kubelet and kubectl on All Hosts

Reference for installing from the Alibaba Cloud mirror site in China: the Kubernetes repository on the Alibaba Cloud open-source mirror site (aliyun.com)

# The legacy apt repository method is used here; run the following commands on all nodes
root@k8s-master01:~# apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl


# Check the available versions
root@k8s-master01:~# apt-cache madison kubeadm|head
   kubeadm |  1.28.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.28.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.28.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.27.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
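Optionally, a specific patch release can be installed and the packages held so apt does not upgrade them unexpectedly; a sketch using 1.28.2-00, the newest version shown in the listing above:

apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
apt-mark hold kubelet kubeadm kubectl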

6. Install cri-dockerd on All Nodes

Kubernetes removed dockershim support in v1.24, and Docker Engine does not implement the CRI natively, so the two can no longer be integrated directly. Mirantis and Docker therefore jointly created the cri-dockerd project, a shim that gives Docker Engine a CRI-compliant interface and lets Kubernetes control Docker through the CRI.
Project: https://github.com/Mirantis/cri-dockerd. cri-dockerd provides pre-built binary packages; download the one matching your OS and platform to install it.
This example uses Ubuntu 22.04 64-bit and the currently latest cri-dockerd release, v0.3.11.

Download page: Release v0.3.11 · Mirantis/cri-dockerd · GitHub

# The following download may fail from inside China without a VPN/proxy
root@k8s-master01:~# curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.11/cri-dockerd_0.3.11.3-0.ubuntu-jammy_amd64.deb

# If the download above fails, download the package manually from the release page and copy it to the servers
root@k8s-master01:~# dpkg -i cri-dockerd_0.3.11.3-0.ubuntu-jammy_amd64.deb
Selecting previously unselected package cri-dockerd.
(Reading database ... 79646 files and directories currently installed.)
Preparing to unpack cri-dockerd_0.3.11.3-0.ubuntu-jammy_amd64.deb ...
Unpacking cri-dockerd (0.3.11~3-0~ubuntu-jammy) ...
Setting up cri-dockerd (0.3.11~3-0~ubuntu-jammy) ...
Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /lib/systemd/system/cri-docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.


root@k8s-master01:~# for i in {131..133};do scp cri-dockerd_0.3.11.3-0.ubuntu-jammy_amd64.deb 10.0.0.$i: ; done
# Install the package with dpkg -i on each of those nodes as well; after installation the
# cri-docker.service unit starts automatically

7. Configure cri-dockerd on All Hosts

For well-known reasons, cri-dockerd cannot pull the required images from k8s.gcr.io inside China and therefore fails to start, so configure cri-dockerd to use a domestic image registry instead.

vim /lib/systemd/system/cri-docker.service
[Service]
Type=notify
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.7
# ^ add the --pod-infra-container-image option to the ExecStart line
ExecReload=/bin/kill -s HUP $MAINPID

# Sync to all nodes
root@k8s-master01:~# for i in {130..133};do scp /lib/systemd/system/cri-docker.service 10.0.0.$i:/lib/systemd/system/cri-docker.service;done
root@k8s-master01:~# systemctl daemon-reload && systemctl restart cri-docker.service
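The unit file was synced to every node, so the reload and restart must also run on each of them, not only on the master; for example:

for i in {131..133};do ssh 10.0.0.$i "systemctl daemon-reload && systemctl restart cri-docker.service";done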

8. Pre-pull the Images Needed for Kubernetes Initialization (Optional)

# List the images that need to be downloaded
root@k8s-master01:~# kubeadm config images list 
I0326 16:00:42.112559   10921 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.8
registry.k8s.io/kube-controller-manager:v1.28.8
registry.k8s.io/kube-scheduler:v1.28.8
registry.k8s.io/kube-proxy:v1.28.8
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

# List the same images from the domestic (Alibaba Cloud) registry
root@k8s-master01:~# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
I0326 16:01:24.292312   10950 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.8
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.8
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.8
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1

# Pull the images from the domestic registry; on 1.24+ the --cri-socket path must also be specified
# This command may also fail to download; if the pull fails, just continue with the following steps
root@k8s-master01:~# kubeadm config images pull --kubernetes-version=v1.28.8 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock

9. Initialize the Kubernetes Cluster on the First Master Node

root@k8s-master01:~# kubeadm init --control-plane-endpoint="kubeapi.xu.com" --kubernetes-version=v1.28.8 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0   --cri-socket unix:///run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers --upload-certs
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0326 16:07:36.866543   11317 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubeapi.xu.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.505726 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c530dc025a0398b572f3ede295f1d388015265d7f6a515df4652f6d7c873581c
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: uzw65t.fmdzh8zgnoshfrfp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubeapi.xu.com:6443 --token uzw65t.fmdzh8zgnoshfrfp \
	--discovery-token-ca-cert-hash sha256:62c5e06e2c5e0e89ea0a4e8af6e901368b5e2fdb46cb9e4d0363f9bab0cc0763 \
	--control-plane --certificate-key c530dc025a0398b572f3ede295f1d388015265d7f6a515df4652f6d7c873581c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi.xu.com:6443 --token uzw65t.fmdzh8zgnoshfrfp \
	--discovery-token-ca-cert-hash sha256:62c5e06e2c5e0e89ea0a4e8af6e901368b5e2fdb46cb9e4d0363f9bab0cc0763 

# If the initialization needs to be redone, run the following reset steps: if any worker nodes have
# already joined, run them on the workers first, then on the control-plane node
kubeadm reset -f --cri-socket unix:///run/cri-dockerd.sock
rm -rf /etc/cni/net.d/  $HOME/.kube/config
reboot
# View the images that have been pulled
root@k8s-master01:~# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.8   e70a71eaa560   11 days ago     125MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.8   e5ae3e4dc656   11 days ago     121MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.8   ad3260645145   11 days ago     59.3MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.8   5ce97277076c   11 days ago     81.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   10 months ago   294MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   13 months ago   53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   17 months ago   744kB
registry.aliyuncs.com/google_containers/pause                     3.7       221177c6082a   2 years ago     711kB

10. Generate the kubectl Authorization File on the First Master Node

kubectl is the command-line client for the kube-apiserver. It implements almost every management operation other than system deployment and is one of the commands Kubernetes administrators use most. kubectl must be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an administrator-privileged kubeconfig file at /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config"; the --kubeconfig option can also point kubectl at a file in a different location. Copy this administrator kubeconfig into the home directory of the target user (the current user root in this example):


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Enable kubectl Command Completion

# kubectl has a rich command set but shell completion is not enabled by default; enable it as follows
kubectl completion bash > /etc/profile.d/kubectl_completion.sh
. /etc/profile.d/kubectl_completion.sh
exit
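Optionally, completion can also be attached to a short alias; this snippet follows the upstream kubectl documentation:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc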

12. Deploy the Network Add-on on the First Master Node

Pod networking in Kubernetes is implemented by third-party plugins, of which there are dozens; the better-known ones include flannel, calico, canal and kube-router. A simple, easy-to-use implementation is the flannel project originally from CoreOS.
The commands below deploy flannel onto the Kubernetes cluster online.
First, download the flanneld binary that matches your OS and hardware platform onto every node and place it under /opt/bin/.
We use flanneld-amd64 here; the latest version is currently v0.24.4, so the download needs to be run on every node in the cluster.

Note: flanneld can be downloaded from the flannel release page (see the sketch below).
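A sketch of the per-node download step described above; the asset name flanneld-amd64 under the v0.24.4 release is an assumption, so check the flannel release page if the URL differs:

mkdir -p /opt/bin/
curl -L -o /opt/bin/flanneld https://github.com/flannel-io/flannel/releases/download/v0.24.4/flanneld-amd64
chmod +x /opt/bin/flanneld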
Afterwards, run the following command on the first initialized master node, k8s-master01, to deploy kube-flannel to Kubernetes.

# No network plugin is installed yet, so the nodes show the following status
root@k8s-master01:~# kubectl get nodes
NAME           STATUS     ROLES           AGE   VERSION
k8s-master01   NotReady   control-plane   22h   v1.28.2
k8s-node01     NotReady   <none>          38s   v1.28.2
k8s-node02     NotReady   <none>          24s   v1.28.2
k8s-node03     NotReady   <none>          21s   v1.28.2

root@k8s-master01:~# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

13. Join All Worker Nodes to the Kubernetes Cluster

Run the following on every worker node to join it to the cluster.

root@k8s-node01:~# kubeadm join kubeapi.xu.com:6443 --token uzw65t.fmdzh8zgnoshfrfp     --discovery-token-ca-cert-hash sha256:62c5e06e2c5e0e89ea0a4e8af6e901368b5e2fdb46cb9e4d0363f9bab0cc0763 --cri-socket unix:///run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node01:~# docker images
REPOSITORY                                           TAG               IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.28.8           5ce97277076c   12 days ago    81.2MB
flannel/flannel-cni-plugin                           v1.4.0-flannel1   77c1250c26d9   2 months ago   9.87MB
registry.aliyuncs.com/google_containers/pause        3.7               221177c6082a   2 years ago    711kB


root@k8s-master01:~# kubectl get nodes -o wide
NAME           STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master01   Ready      control-plane   23h   v1.28.2   10.0.0.130    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node01     Ready      <none>          65m   v1.28.2   10.0.0.131    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node02     NotReady   <none>          64m   v1.28.2   10.0.0.132    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node03     Ready      <none>          64m   v1.28.2   10.0.0.133    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
root@k8s-master01:~# 

14. Test Application Orchestration and Service Access

At this point the cluster infrastructure, one master with three workers, is fully deployed and its core functionality can be tested. demoapp is a web application; it can be orchestrated on the cluster as Pods and accessed from outside the cluster.

# Create a Deployment with three Pod replicas
root@k8s-master01:~# kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created
root@k8s-master01:~# kubectl get nods -o wide
error: the server doesn't have a resource type "nods"
root@k8s-master01:~# kubectl get nodes -o wide
NAME           STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master01   Ready    control-plane   23h   v1.28.2   10.0.0.130    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node01     Ready    <none>          68m   v1.28.2   10.0.0.131    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node02     Ready    <none>          68m   v1.28.2   10.0.0.132    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
k8s-node03     Ready    <none>          68m   v1.28.2   10.0.0.133    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   docker://24.0.5
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS             RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-bjnbd   1/1     Running            0          10m   10.244.2.2   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-spg7n   1/1     Running            0          10m   10.244.1.2   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-zbs8t   0/1     ImagePullBackOff   0          10m   10.244.3.4   k8s-node03   <none>           <none>
root@k8s-master01:~#  

# Access test
root@k8s-master01:~#  curl  10.244.2.2
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
root@k8s-master01:~#  curl  10.244.1.2
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
root@k8s-master01:~# curl 10.244.3.4
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!





1) Create a NodePort Service for demoapp and check the NodePort it was assigned; the PORT(S) column shows <service port>:<NodePort>, which allows access from outside the cluster
root@k8s-master01:~# kubectl create service nodeport demoapp --tcp=80:80
service/demoapp created
root@k8s-master01:~# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.103.88.140   <none>        80:32492/TCP   15s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        24h
root@k8s-master01:~# 

root@k8s-master01:~# while true ;do curl 10.103.88.140;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
2) From outside the cluster, the application on demoapp can be accessed at the URL "http://NodeIP:32492"

For example, open http://10.0.0.131:32492 in a browser outside the cluster.

root@k8s-master01:~# curl 10.0.0.131:32492
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
root@k8s-master01:~# curl 10.0.0.131:32492
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
root@k8s-master01:~# curl 10.0.0.131:32492
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
3) Scale out

# Scale out: increase the Deployment from the initial 3 Pods to 6
root@k8s-master01:~# kubectl scale deployment demoapp --replicas=6
deployment.apps/demoapp scaled
root@k8s-master01:~# kubectl get pods -o wide 
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-bjnbd   1/1     Running   0          37m   10.244.2.2   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-qvr2q   1/1     Running   0          20s   10.244.2.3   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-rgm2c   1/1     Running   0          20s   10.244.1.3   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-spg7n   1/1     Running   0          37m   10.244.1.2   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-t62tx   1/1     Running   0          20s   10.244.3.5   k8s-node03   <none>           <none>
demoapp-7c58cd6bb-zbs8t   1/1     Running   0          37m   10.244.3.4   k8s-node03   <none>           <none>

4) Scale in
# Scale the Deployment from 6 Pods back down to 3
root@k8s-master01:~# kubectl scale deployment demoapp --replicas=3
deployment.apps/demoapp scaled
5) Self-healing

If a Pod fails and cannot run while the application is serving traffic, a new Pod is created automatically to take its place.

# While requests keep being served, delete one of the Pods and watch the self-healing process
root@k8s-master01:~# while true ;do curl 10.103.88.140;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-rgm2c, ServerIP: 10.244.1.3!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-bjnbd, ServerIP: 10.244.2.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-rgm2c, ServerIP: 10.244.1.3!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-t62tx, ServerIP: 10.244.3.5!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-rgm2c, ServerIP: 10.244.1.3!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-qvr2q, ServerIP: 10.244.2.3!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-qvr2q, ServerIP: 10.244.2.3!

root@k8s-master01:~# kubectl get pods -o wide 
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-bjnbd   1/1     Running   0          41m     10.244.2.2   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-qvr2q   1/1     Running   0          4m50s   10.244.2.3   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-rgm2c   1/1     Running   0          4m50s   10.244.1.3   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-spg7n   1/1     Running   0          41m     10.244.1.2   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-t62tx   1/1     Running   0          4m50s   10.244.3.5   k8s-node03   <none>           <none>
demoapp-7c58cd6bb-zbs8t   1/1     Running   0          41m     10.244.3.4   k8s-node03   <none>           <none>

root@k8s-master01:~# kubectl delete pods demoapp-7c58cd6bb-bjnbd
pod "demoapp-7c58cd6bb-bjnbd" deleted

# After Pod demoapp-7c58cd6bb-bjnbd (10.244.2.2) is deleted, the Service temporarily stops routing traffic to it; a replacement Pod is created automatically and added to the Service's endpoints, in this case demoapp-7c58cd6bb-v8nv5 (10.244.2.4)
root@k8s-master01:~# kubectl get pods -o wide 
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-qvr2q   1/1     Running   0          6m15s   10.244.2.3   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-rgm2c   1/1     Running   0          6m15s   10.244.1.3   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-spg7n   1/1     Running   0          43m     10.244.1.2   k8s-node01   <none>           <none>
demoapp-7c58cd6bb-t62tx   1/1     Running   0          6m15s   10.244.3.5   k8s-node03   <none>           <none>
demoapp-7c58cd6bb-v8nv5   1/1     Running   0          46s     10.244.2.4   k8s-node02   <none>           <none>
demoapp-7c58cd6bb-zbs8t   1/1     Running   0          43m     10.244.3.4   k8s-node03   <none>           <none>
root@k8s-master01:~# 

6) Rolling update
# Perform a rolling update
root@k8s-master01:~# kubectl set image deployment/demoapp demoapp=ikubernetes/demoapp:v1.1 && kubectl rollout status deployment/demoapp
Waiting for deployment "demoapp" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "demoapp" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "demoapp" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "demoapp" rollout to finish: 3 out of 6 new replicas have been updated...
........
Waiting for deployment "demoapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "demoapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "demoapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "demoapp" rollout to finish: 5 of 6 updated replicas are available...
deployment "demoapp" successfully rolled out
root@k8s-master01:~# 

root@k8s-master01:~# while true ;do curl 10.103.88.140;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-t62tx, ServerIP: 10.244.3.5!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-rgm2c, ServerIP: 10.244.1.3!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-qvr2q, ServerIP: 10.244.2.3!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.0, ServerName: demoapp-5894dd4b47-v656r, ServerIP: 10.244.3.7!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.0, ServerName: demoapp-5894dd4b47-v656r, ServerIP: 10.244.3.7!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-spg7n, ServerIP: 10.244.1.2!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.0, ServerName: demoapp-5894dd4b47-v656r, ServerIP: 10.244.3.7!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-t62tx, ServerIP: 10.244.3.5!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-7c58cd6bb-zbs8t, ServerIP: 10.244.3.4!
....
ServerIP: 10.244.1.5!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.0, ServerName: demoapp-5894dd4b47-ldmrj, ServerIP: 10.244.3.8!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.0, ServerName: demoapp-5894dd4b47-sjdjx, ServerIP: 10.244.1.5!
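If the new version misbehaves, the Deployment can be rolled back to the previous revision; these commands are not part of the transcript above:

kubectl rollout history deployment/demoapp
kubectl rollout undo deployment/demoapp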