Chapter 11: Setting Up a Kubernetes Cluster

1. Preliminary Setup (required on all three machines)

1. Prepare three virtual machines with the following configuration:

Role          IP               OS          Hostname     Network           Specs
master        192.168.10.136   CentOS 7.9  chenyujie    NAT or bridged    2 vCPU / 4 GB RAM / 20 GB disk
worker node   192.168.10.137   CentOS 7.9  chenyujie2   NAT or bridged    2 vCPU / 4 GB RAM / 20 GB disk
worker node   192.168.10.138   CentOS 7.9  chenyujie3   NAT or bridged    2 vCPU / 4 GB RAM / 20 GB disk

Note: cat /etc/redhat-release shows the OS version.
Note: do not undersize the VMs, or later steps will fail (kubeadm's preflight checks expect at least 2 CPUs and about 1700 MB of RAM).

2. Configure a Static IP

Edit the NIC configuration file: set BOOTPROTO to static, add the address, and add DNS servers (DNS1/DNS2 below, e.g. 114.114.114.114):

[root@chenyujie ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="2ebb5440-1e41-4663-b25f-46af28e168db"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.136"
NETMASK="255.255.255.0"
GATEWAY="192.168.10.2"
DNS1=8.8.8.8
DNS2=114.114.114.114

Restart the network service (systemctl restart network) for the configuration to take effect.

3. Set the Hostname

hostnamectl set-hostname <new-hostname>
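
For example, run the matching command on each machine (hostnames as used throughout this chapter):

hostnamectl set-hostname chenyujie     # on 192.168.10.136, the master
hostnamectl set-hostname chenyujie2    # on 192.168.10.137
hostnamectl set-hostname chenyujie3    # on 192.168.10.138
bash                                   # start a new shell (or re-login) to see the new prompt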

4. Configure the hosts File
[root@chenyujie ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
199.232.68.133 raw.githubusercontent.com
140.82.114.4    github.com
192.168.10.136 chenyujie
192.168.10.137 chenyujie2
192.168.10.138 chenyujie3
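
After saving, you can verify that each hostname resolves and the peers are reachable (a quick sanity check, assuming all three VMs are already up):

for h in chenyujie chenyujie2 chenyujie3; do ping -c 1 $h; done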

5. Update the yum Repositories

1. Back up the original repo file

mv /etc/yum.repos.d/CentOS-Base.repo  /etc/yum.repos.d/CentOS-Base.repo.backup

2. Download the Aliyun repo file

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

3. Add the yum repo needed to install K8s

# cat /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

4. Clean the yum cache

[root@chenyujie ~]# yum clean all

5. Rebuild the yum cache

[root@chenyujie ~]# yum makecache fast

6. Update installed packages

[root@chenyujie ~]# yum -y update

6. Stop the Firewall and Disable It at Boot
[root@chenyujie ~]# systemctl stop firewalld
[root@chenyujie ~]# systemctl disable firewalld

7. Synchronize the Time
[root@chenyujie ~]# yum install ntpdate -y
Loaded plugins: fastestmirror
Determining fastest mirrors
[root@chenyujie ~]# ntpdate time.windows.com
 8 May 10:42:08 ntpdate[1466]: adjust time server 40.81.94.65 offset 0.002024 sec
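
ntpdate is a one-shot sync; to keep the clocks aligned over time you can add a cron entry (a sketch; any reachable NTP server will do):

crontab -e
# add: sync every 30 minutes
*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1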

8. Disable SELinux
[root@chenyujie3 ~]# sestatus
# check current status
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@chenyujie ~]# setenforce 0
# disable temporarily (takes effect immediately, until reboot)
[root@chenyujie ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# disable permanently (anchoring on ^SELINUX= avoids also rewriting the comment lines in the file)
[root@chenyujie ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          disabled
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@chenyujie ~]# 

Note: the permanent SELinux change only takes full effect after a reboot.

9. Disable swap
[root@chenyujie ~]# swapoff -a         # disable temporarily

[root@chenyujie ~]# vi /etc/fstab     # disable permanently:
# comment out the swap line with "#":
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@chenyujie ~]# free -h           # verify
              total        used        free      shared  buff/cache   available
Mem:           972M        253M        152M         13M        567M        552M
Swap:            0B          0B          0B
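
If you prefer not to edit /etc/fstab by hand, a sed one-liner can comment out the swap entry (a sketch; double-check the file afterwards):

sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab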

10. Adjust Kernel Parameters
[root@chenyujie sysctl.d]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@chenyujie sysctl.d]# sysctl --system
# apply all sysctl settings
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
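
If /etc/sysctl.d/k8s.conf does not exist yet, create it first; a heredoc keeps it to one step (contents identical to the file shown above):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system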

11. Passwordless SSH Between the Servers
[root@chenyujie ~]# ssh-keygen -t rsa          # press Enter three times to accept the defaults
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Usy9OO0esMEtd9t/r2KQwLgxaK66iUdFfCZ1za82y2o root@chenyujie
The key's randomart image is:
+---[RSA 2048]----+
|   . .. .o       |
|    + o+ .o      |
|   . = o+ ..     |
|    + +oo+ ..    |
|   +  .+S.=o.    |
|  . . .. O*. o   |
| . .    .oo+. .  |
|..o     E.o.o  ..|
|=+     ..... ...=|
+----[SHA256]-----+
[root@chenyujie ~]# scp -p ~/.ssh/id_rsa.pub root@192.168.10.136:/root/.ssh/authorized_keys
The authenticity of host '192.168.10.136 (192.168.10.136)' can't be established.
ECDSA key fingerprint is SHA256:ky3ICaeXuOy6jluv+95SqIaQitX8dmkC1i+7db7vC8w.
ECDSA key fingerprint is MD5:c2:3b:6c:87:5c:dd:29:22:f2:bb:f1:78:56:ab:b4:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.10.136' (ECDSA) to the list of known hosts.
root@192.168.10.136's password: 
id_rsa.pub           100%  396   257.3KB/s   00:00    
# the key must be copied to all three machines
[root@chenyujie ~]# ssh 192.168.10.137
Last login: Wed May  8 10:24:37 2024 from 192.168.10.1
[root@chenyujie2 ~]# exit
logout
Connection to 192.168.10.137 closed.
[root@chenyujie ~]# 

Note: the message "Warning: Permanently added '192.168.10.137' (ECDSA) to the list of known hosts." is harmless; if you are still prompted for a password afterwards, it just means the public key has not been copied to that machine yet.
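
scp overwrites authorized_keys, clobbering any keys already present. ssh-copy-id appends instead, so a safer way to distribute the key to all three machines is (a sketch, using the IPs from the table above; each host prompts for its password once):

for ip in 192.168.10.136 192.168.10.137 192.168.10.138; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done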

2. Install Docker

See my earlier article for the detailed installation steps.

3. Enable Bridge Mode

(run these commands on every K8s node)

Bridged networking is somewhat more involved to configure, but it allows access across the LAN as well as Internet access, and is widely used.

# enable permanently
[root@chenyujie ~]# cat /etc/sysctl.conf 
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

vm.swappiness = 0
net.ipv4.ip_forward = 1
# apply the configuration
[root@chenyujie ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
net.ipv4.ip_forward = 1
[root@chenyujie ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                155432  1 br_netfilter
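
If the lsmod command prints nothing, the br_netfilter module is not loaded and the net.bridge.* keys above will fail to apply. Load it and make it persist across reboots (a sketch):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf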

4. Enable IPVS

(run these commands on every K8s node)

Why enable IPVS rather than relying on iptables: K8s does heavy packet forwarding (kube-proxy), and without IPVS it falls back to iptables, which is less efficient at scale. The official documentation therefore recommends loading the IPVS kernel modules on every K8s node.

[root@chenyujie ~]# cat /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
# make the script executable so it can be run now and sourced at boot
[root@chenyujie ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
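
chmod by itself loads nothing; run the script once and confirm the modules are present:

bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack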

5. Start Docker and Enable It at Boot

(run these commands on every K8s node)

The K8s version installed later is kubeadm 1.23.0, which targets Docker 20.10.x (we use 20.10.6).

The locally installed Docker is 26.1.1, so Docker must be downgraded first.

[root@chenyujie2 ~]# yum remove *docker*
# remove the existing Docker packages
[root@chenyujie2 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y
[root@chenyujie2 ~]# docker version
# check the Docker version
Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:45:33 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:57 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Configure a Docker registry mirror, and set the systemd cgroup driver (the kubelet deployed by kubeadm defaults to the systemd cgroup driver, and Docker must match it):

[root@chenyujie ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://t81qmnz6.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@chenyujie ~]# systemctl enable docker --now
[root@chenyujie ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2024-05-13 17:04:05 CST; 6min ago
     Docs: https://docs.docker.com
 Main PID: 4355 (dockerd)
   CGroup: /system.slice/docker.service
           └─4355 /usr/bin/dockerd -H fd:// --contai...

May 13 17:03:21 chenyujie2 systemd[1]: Starting Doc...
May 13 17:03:26 chenyujie2 dockerd[4355]: time="202...
May 13 17:03:32 chenyujie2 dockerd[4355]: time="202...
May 13 17:03:57 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:01 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:01 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:03 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:03 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:05 chenyujie2 dockerd[4355]: time="202...
May 13 17:04:05 chenyujie2 systemd[1]: Started Dock...
Hint: Some lines were ellipsized, use -l to show in full.
[root@chenyujie2 ~]# reboot
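
Before moving on to kubeadm init, it is worth confirming that the cgroup driver change from daemon.json took effect; a kubelet/Docker cgroup driver mismatch is a classic cause of init failures:

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd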

6. Deploy the K8s Cluster

1. There are currently two main ways to deploy a production Kubernetes cluster:
  • Kubeadm: a K8s deployment tool providing kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
  • Binary: download the release binaries from GitHub and deploy every component by hand to assemble the cluster.

This chapter uses kubeadm. For production, the binary approach is recommended: kubeadm stands a cluster up quickly but hides the details, so problems (configuration issues, for example) are harder to spot.

2. Install kubeadm, kubelet, and kubectl on all hosts

Versions change frequently, so we pin the version here.

[root@chenyujie ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
 # after installation do not start kubelet yet; just enable it at boot
[root@chenyujie ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@chenyujie ~]# 

Verify the installation:

[root@chenyujie ~]# kubelet --version
Kubernetes v1.23.0

Start kubelet (it will restart-loop until kubeadm init runs; that is expected):

[root@chenyujie ~]#  systemctl daemon-reload
[root@chenyujie ~]#  systemctl start kubelet
[root@chenyujie ~]#  systemctl enable kubelet

Generate the default init configuration and modify it:

[root@chenyujie ~]# kubeadm config print init-defaults > init-config.yaml

[root@chenyujie ~]# cat  init-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.136  # master node IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: chenyujie # node name of the master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # changed to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3. Initialize the K8s cluster (on the master node)

Pull the required K8s images:

[root@chenyujie ~]# kubeadm config images pull --config=init-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Initialize the K8s cluster.

Note: the IP passed to --apiserver-advertise-address must be the master's IP.

[root@chenyujie ~]# kubeadm init --apiserver-advertise-address=192.168.10.136 --apiserver-bind-port=6443 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --kubernetes-version=1.23.0 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
        [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
        [WARNING Mem]: the system RAM (972 MB) is less than the minimum 1700 MB
        [WARNING Port-6443]: Port 6443 is in use
        [WARNING Port-10259]: Port 10259 is in use
        [WARNING Port-10257]: Port 10257 is in use
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING Port-2379]: Port 2379 is in use
        [WARNING Port-2380]: Port 2380 is in use
        [WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 0.038563 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node chenyujie as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node chenyujie as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eq05ta.a3x3ujald33lrk66
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.136:6443 --token eq05ta.a3x3ujald33lrk66 \
        --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e 
[root@chenyujie ~]# 

  • apiserver-advertise-address: the master IP address used to communicate with the other nodes
  • image-repository: the default registry k8s.gcr.io is unreachable from China, so we point at the Aliyun mirror
  • kubernetes-version: the K8s version, matching what was installed above (check with: rpm -q kubeadm)
  • service-cidr: the cluster-internal virtual network, the unified access entry for Pods (Service IPs)
  • pod-network-cidr: the Pod network; must match the CNI network component YAML deployed below
  • ignore-preflight-errors: ignore preflight warnings/errors; just add --ignore-preflight-errors=all

4. Set up the kubectl config file. This in effect authorizes kubectl: with this credential, kubectl can manage the K8s cluster.
[root@chenyujie ~]# mkdir -p $HOME/.kube
[root@chenyujie ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@chenyujie ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@chenyujie ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE   VERSION
chenyujie   NotReady   control-plane,master   97m   v1.23.0

The other two node machines need this as well (do it on both chenyujie2 and chenyujie3):

[root@chenyujie3 ~]#  mkdir -p $HOME/.kube
[root@chenyujie3 ~]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
# if you see this error, copy admin.conf over from the master:
[root@chenyujie ~]# scp /etc/kubernetes/admin.conf root@192.168.10.138:/etc/kubernetes/admin.conf
root@192.168.10.138's password: 
admin.conf           100% 5642     1.9MB/s   00:00    
[root@chenyujie ~]# scp /etc/kubernetes/admin.conf root@192.168.10.137:/etc/kubernetes/admin.conf
root@192.168.10.137's password: 
admin.conf           100% 5642   413.2KB/s   00:00    

[root@chenyujie3 ~]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@chenyujie3 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join chenyujie2 and chenyujie3 to the cluster

On the master, generate a join command for the nodes (this also confirms that kubectl/kubeadm are working):

[root@chenyujie ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.136:6443 --token tgd4rb.uejzws0q4w29l59n --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e

Next, join the worker nodes to the Kubernetes master; run the following on each node host.

Node chenyujie3:

[root@chenyujie3 ~]# kubeadm join 192.168.10.136:6443 --token 1hamwk.7pfehx5ki8xzp280 --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e --ignore-preflight-errors=all
[preflight] Running pre-flight checks
        [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Node chenyujie2:

[root@chenyujie2 ~]# kubeadm join 192.168.10.136:6443 --token 879i40.vd5e5sdge237dnat --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e 
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@chenyujie2 ~]# kubeadm join 192.168.10.136:6443 --token 879i40.vd5e5sdge237dnat --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e --ignore-preflight-errors=all
[preflight] Running pre-flight checks
        [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# The message "This node has joined the cluster" means the node's initialization is complete.

By default a token is valid for 24 hours; once expired it can no longer be used, and a new one can be created with the following command:

[root@chenyujie3 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.136:6443 --token 22x58l.0dghm4wy8srs0amo --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e
# create a token that never expires
[root@chenyujie3 ~]# kubeadm token create --ttl 0
r83s8a.0xq9a26s7nwdjg4u
[root@chenyujie2 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.136:6443 --token f7dxgp.c7pxp6vxvz24ob4v --discovery-token-ca-cert-hash sha256:003a9579260dcb869e0d0eae22bfd6f4f3ad8a3b5b8ab93f20e3725af4cc5a3e 
[root@chenyujie2 ~]# kubeadm token create --ttl 0
6frqrv.hfkbt2psa9i1ak1c
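
Existing tokens, including the never-expiring ones just created, can be listed on the master:

kubeadm token list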

6. Verify the Cluster

Check with the kubectl get nodes command:

[root@chenyujie ~]#  kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
chenyujie    Ready      control-plane,master   3d22h   v1.23.0
chenyujie2   NotReady   <none>                 107s    v1.23.0
chenyujie3   NotReady   <none>                 109s    v1.23.0
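
The two workers show NotReady because no Pod network (CNI) add-on has been deployed yet. This cluster uses Flannel (note the kube-flannel namespace in the pod listing further below); a typical install, assuming the master can reach raw.githubusercontent.com (which is why it was added to /etc/hosts earlier), looks like:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

The Pod CIDR in that manifest defaults to 10.244.0.0/16, matching the --pod-network-cidr passed to kubeadm init.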

Label the two worker nodes (run on the master, chenyujie).

Add the node labels ("=" assigns the label value):

[root@chenyujie ~]# kubectl label nodes chenyujie2 node-role.kubernetes.io/work=work
node/chenyujie2 labeled
[root@chenyujie ~]# kubectl label nodes chenyujie3 node-role.kubernetes.io/work=work
node/chenyujie3 labeled
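
A label can be removed later by appending "-" to the label key (standard kubectl syntax):

kubectl label nodes chenyujie2 node-role.kubernetes.io/work-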

Run kubectl get nodes again:

[root@chenyujie ~]#  kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
chenyujie    Ready    control-plane,master   3d22h   v1.23.0
chenyujie2   Ready    work                   14m     v1.23.0
chenyujie3   Ready    work                   3d19h   v1.23.0

List the pods in all namespaces:

[root@chenyujie ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE      NAME                                READY   STATUS    RESTARTS         AGE     IP               NODE         NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-8zrlm               1/1     Running   0                14m     192.168.10.137   chenyujie2   <none>           <none>
kube-flannel   kube-flannel-ds-hpl2k               1/1     Running   2 (44m ago)      3d19h   192.168.10.138   chenyujie3   <none>           <none>
kube-flannel   kube-flannel-ds-qzlh9               1/1     Running   0                3d20h   192.168.10.136   chenyujie    <none>           <none>
kube-system    coredns-6d8c4cb4d-bkjrh             1/1     Running   1 (45m ago)      3d22h   10.244.1.5       chenyujie3   <none>           <none>
kube-system    coredns-6d8c4cb4d-xpp6d             1/1     Running   1 (45m ago)      3d22h   10.244.1.4       chenyujie3   <none>           <none>
kube-system    etcd-chenyujie                      1/1     Running   2 (9m26s ago)    3d22h   192.168.10.136   chenyujie    <none>           <none>
kube-system    kube-apiserver-chenyujie            1/1     Running   2 (9m18s ago)    3d22h   192.168.10.136   chenyujie    <none>           <none>
kube-system    kube-controller-manager-chenyujie   1/1     Running   21 (9m49s ago)   3d22h   192.168.10.136   chenyujie    <none>           <none>
kube-system    kube-proxy-4wrd6                    1/1     Running   1 (45m ago)      3d19h   192.168.10.138   chenyujie3   <none>           <none>
kube-system    kube-proxy-8j8v8                    1/1     Running   1 (46m ago)      3d22h   192.168.10.136   chenyujie    <none>           <none>
kube-system    kube-proxy-fgndt                    1/1     Running   0                14m     192.168.10.137   chenyujie2   <none>           <none>
kube-system    kube-scheduler-chenyujie            1/1     Running   19 (9m45s ago)   3d22h   192.168.10.136   chenyujie    <none>           <none>

Check the health of the cluster components:

[root@chenyujie ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
With that, our K8s environment is fully set up!