K8S cluster topologies: a cluster can be built as single-master/multi-node or multi-master/multi-node. For reliability, production environments use the multi-master topology; this article builds a demonstration cluster with one master and two worker nodes.

Step 1: Prepare the servers

Prepare three servers running CentOS 7.5 or later, each with a working network interface, and make sure all three machines can reach one another.

Master: 172.21.10.10    Node1: 172.21.10.11    Node2: 172.21.10.12

Set the hostname on each machine (run the matching command on the corresponding host):

hostnamectl --static set-hostname  k8s-master
hostnamectl --static set-hostname  k8s-node01
hostnamectl --static set-hostname  k8s-node02

[root@op-k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@k8s-master ~]# ip add | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
    inet 172.21.10.10/24 brd 172.21.10.255 scope global eth0
    inet6 fe80::5054:ff:fea9:e347/64 scope link

Step 2: Basic server configuration

1. Name resolution: make sure /etc/hosts on all three machines contains the entries below (adding the k8s-master/k8s-node01/k8s-node02 hostnames set in Step 1 as well avoids the hostname-lookup warning kubeadm prints later):

127.0.0.1 VM-10-10-centos VM-10-10-centos
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4

::1 VM-10-10-centos VM-10-10-centos
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

172.21.10.10 op-k8s-master01
172.21.10.11 op-k8s-node01
172.21.10.12 op-k8s-node02
[root@k8s-master ~]# ping op-k8s-node01
PING op-k8s-node01 (172.21.10.11) 56(84) bytes of data.
64 bytes from op-k8s-node01 (172.21.10.11): icmp_seq=1 ttl=64 time=0.207 ms
64 bytes from op-k8s-node01 (172.21.10.11): icmp_seq=2 ttl=64 time=0.202 ms
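
The same three entries must exist on every node. A minimal sketch for appending them (run as root on each machine; it assumes the default localhost lines shown above are already present):

cat >> /etc/hosts <<EOF
172.21.10.10 op-k8s-master01
172.21.10.11 op-k8s-node01
172.21.10.12 op-k8s-node02
EOF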

2. Time synchronization: make sure the clocks on all three machines are kept in sync.

Set the system time zone:
timedatectl set-timezone Asia/Shanghai

Write the current UTC time to the hardware clock:
timedatectl set-local-rtc 0

Restart the services that depend on the system time:
systemctl restart rsyslog 
systemctl restart crond
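
If the clocks are not already synchronized, chrony (available from the CentOS base repositories) is one option; a minimal sketch:

yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources    # confirm that time sources are reachable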

3. Disable iptables and firewalld (Kubernetes and Docker generate a large number of iptables rules at runtime; to keep the system's own rules from interfering with them, turn the system firewall off)

[root@op-k8s-master ~]# systemctl stop firewalld
[root@op-k8s-master ~]# systemctl disable firewalld
[root@op-k8s-master ~]# systemctl stop iptables
[root@op-k8s-master ~]# systemctl disable iptables

4. Disable SELinux

[root@op-k8s-master ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
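
A sketch of the corresponding commands (setenforce only affects the running system; the config-file change takes effect after the reboot at the end of this step):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config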

5. Disable the swap partition

[root@op-k8s-master ~]# more /etc/fstab
# /etc/fstab
# Created by anaconda on Sun Jul 11 15:11:03 2021
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=66d8c51a-8b6f-4e76-b687-53eaaae260b3 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
The change takes effect after a reboot:
[root@op-k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3777         717         191          57        2868        2730
Swap:             0           0           0
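
To turn swap off immediately instead of waiting for the reboot, and to comment out the swap entry in /etc/fstab in one go, something like the following can be used (a sketch; double-check /etc/fstab afterwards):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab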

6. Adjust the Linux kernel parameters

6.1 Enable bridge filtering and IP forwarding

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# do not use swap unless the system is out of memory
vm.swappiness=0 
# do not check whether enough physical memory is available
vm.overcommit_memory=1 
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0 
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

6.2 Reload the configuration (the two net.bridge settings only take effect once the br_netfilter module from step 6.3 is loaded):

[root@op-k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf

6.3 Load the bridge netfilter module:

[root@op-k8s-master ~]# modprobe br_netfilter

6.4 Check that the bridge netfilter module loaded successfully:

[root@op-k8s-master ~]# lsmod |grep br_net
br_netfilter           22256  0
bridge                151336  1 br_netfilter
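
modprobe only loads the module for the current boot. To have br_netfilter loaded automatically after the reboot in item 8 below, it can be declared in modules-load.d (a sketch):

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF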

7. Enable IPVS support

In Kubernetes, a Service can be proxied in one of two modes: iptables or IPVS. IPVS performs noticeably better, but using it requires the IPVS kernel modules to be loaded manually.

7.1 Install ipset and ipvsadm

yum install ipset ipvsadm -y

7.2 Write the modules to load into a script file

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack br_netfilter"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
    
The steps below (7.3 to 7.5) can also be run as a single line:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

7.3 Make the script executable

[root@op-k8s-master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules

7.4 Run the script

[root@op-k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules

7.5 Check that the modules loaded successfully

[root@k8s-master ~]# lsmod |grep -e ip_vs -e nf_conntrack 
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  2 ip_vs,nf_nat
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack

8. Reboot the servers so that the SELinux and swap changes take effect.

Step 3: Install Docker

1. Switch the YUM repository to a domestic Docker mirror

[root@op-k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O/etc/yum.repos.d/docker-ce.repo

2. Check which Docker versions the repository offers (add --showduplicates to list every version; otherwise only the latest is shown)

[root@op-k8s-master ~]# yum list docker-ce
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: ftp.sjtu.edu.cn
 * extras: ftp.sjtu.edu.cn
 * updates: mirror.lzu.edu.cn
Installed Packages
docker-ce.x86_64                                                   3:20.10.11-3.el7                                                   @docker-ce-st

3. Install docker-ce

yum install docker-ce.x86_64 -y

4. Add a configuration file (Docker uses the cgroupfs cgroup driver by default, while Kubernetes recommends systemd instead) and configure the image registries at the same time.

mkdir /etc/docker
cat > /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["172.21.10.10:5000", "harbor.steponeai.com"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn/",
    "https://hub-mirror.c.163.com",
    "https://registry.docker-cn.com",
    "https://kn0t2bca.mirror.aliyuncs.com"
  ],
  "bip": "192.168.0.1/24"
}
EOF

5. Start Docker

[root@op-k8s-master ~]# systemctl start docker
[root@op-k8s-master ~]# systemctl enable docker
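
With Docker running, it is worth confirming that the cgroup driver set in daemon.json took effect (kubelet and Docker must agree on it):

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd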

6. Check the Docker version

[root@op-k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        dea9396
 Built:             Thu Nov 18 00:38:53 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:37:17 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Step 4: Install the Kubernetes components

1. Update the YUM repository. The Kubernetes packages are hosted abroad and downloads are slow, so switch to a domestic mirror by creating /etc/yum.repos.d/kubernetes.repo with the following content:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the kubeadm, kubelet, and kubectl components

[root@op-k8s-master ~]# yum list | grep kube
cri-tools.x86_64                            1.19.0-0                   @kubernetes
kubeadm.x86_64                              1.22.4-0                   @kubernetes
kubectl.x86_64                              1.22.4-0                   @kubernetes
kubelet.x86_64                              1.22.4-0                   @kubernetes
[root@op-k8s-master ~]# yum install kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y
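
The command above installs the newest 1.22.x packages in the repository. To pin an exact version instead (for example so new nodes match existing ones), the version can be given explicitly; a sketch assuming 1.22.4-0 is the target:

yum install kubeadm-1.22.4-0 kubelet-1.22.4-0 kubectl-1.22.4-0 -y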

3. Configure the kubelet cgroup driver

[root@op-k8s-master ~]# more /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

4. Enable the kubelet to start at boot

[root@op-k8s-master ~]# systemctl enable kubelet

Step 5: Deploy the Kubernetes cluster

1. List the images required to deploy the cluster. Since kubeadm is used for the deployment, the list can be printed with:

[root@op-k8s-master sysconfig]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.4
k8s.gcr.io/kube-controller-manager:v1.22.4
k8s.gcr.io/kube-scheduler:v1.22.4
k8s.gcr.io/kube-proxy:v1.22.4
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

2. Download the images. kubeadm pulls from Google's registry (k8s.gcr.io) by default, which is usually unreachable, so pull the images from Alibaba Cloud first and then retag them with the names kubeadm expects. The two scripts below handle this image by image; a single-loop version is sketched after them.

Pulling the Docker images

Note: strictly speaking this step is optional, because kubeadm pulls the images automatically during initialization. The automatic pull uses the official k8s.gcr.io registry, though, so it usually fails; here the Alibaba Cloud copies are pulled manually instead.

Note: the image versions must match the kubeadm/kubelet/kubectl versions installed earlier.

Run this on every node.

Two scripts are used: one pulls the images, the other changes their tags. The required versions are exactly those listed by kubeadm config images list above (they also appear in the error messages if initialization is attempted without them).

Although the images are pulled from Alibaba Cloud, they still have to be retagged with the names kubeadm recognizes; otherwise kubeadm will not find them during initialization.

# cat dockerPull.sh 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5


# cat dockerTag.sh 
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 k8s.gcr.io/kube-controller-manager:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 k8s.gcr.io/kube-proxy:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 k8s.gcr.io/kube-apiserver:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 k8s.gcr.io/kube-scheduler:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
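
As mentioned above, the two scripts can also be collapsed into a single loop. A sketch (the image list mirrors the kubeadm config images list output; the extra retag at the end accounts for coredns having a different path and tag upstream):

#!/bin/bash
# pull each image from Alibaba Cloud and retag it with the name kubeadm expects
images="kube-apiserver:v1.22.4 kube-controller-manager:v1.22.4 kube-scheduler:v1.22.4 kube-proxy:v1.22.4 pause:3.5 etcd:3.5.0-0 coredns:1.8.4"
for img in ${images}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${img}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
done
# kubeadm expects coredns under k8s.gcr.io/coredns/coredns:v1.8.4, so retag it once more
docker tag k8s.gcr.io/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4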

3. Check the local Docker images to confirm that everything kubeadm needs is present:

[root@op-k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver            v1.22.4   8a5cc299272d   34 hours ago   128MB
k8s.gcr.io/kube-controller-manager   v1.22.4   0ce02f92d3e4   34 hours ago   122MB
k8s.gcr.io/kube-scheduler            v1.22.4   721ba97f54a6   34 hours ago   52.7MB
k8s.gcr.io/kube-proxy                v1.22.4   edeff87e4802   34 hours ago   104MB
k8s.gcr.io/etcd                      3.5.0-0   004811815584   5 months ago   295MB
k8s.gcr.io/coredns/coredns           v1.8.4    8d147537fb7d   5 months ago   47.6MB
k8s.gcr.io/pause                     3.5       ed210e3e4a5b   8 months ago   683kB

4. Initialize the cluster on the master node

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=172.21.10.10 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 183.60.83.19:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.21.10.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.21.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.21.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.002551 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: t89hze.djlcmhfq98awq35j
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.21.10.10:6443 --token t89hze.djlcmhfq98awq35j \
        --discovery-token-ca-cert-hash sha256:0d231b7884c25bd2ca002cb1f58ad2c47ff53039b8b5f7188a61d5e25a0ef47c 

5. Once the deployment succeeds, run the following commands as instructed in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join the worker nodes to the cluster:

[root@op-k8s-node1 ~]# kubeadm join 172.21.10.10:6443 --token t89hze.djlcmhfq98awq35j --discovery-token-ca-cert-hash sha256:0d231b7884c25bd2ca002cb1f58ad2c47ff53039b8b5f7188a61d5e25a0ef47c 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
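
The bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time a node joins, a fresh join command can be generated on the master:

kubeadm token create --print-join-command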

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   87m     v1.22.4
k8s-node01   NotReady   <none>                 2m24s   v1.22.4
k8s-node02   NotReady   <none>                 8s      v1.22.4

As the output shows, the nodes have joined the cluster but are all still in the NotReady state.

Inspect the node object's details, status, and events:

kubectl describe node k8s-master

The kubectl describe output shows that the node is NotReady because no network plugin has been deployed yet, so components such as kube-proxy are still starting. We can also use kubectl to check the system Pods on the node; kube-system is the namespace Kubernetes reserves for its own system Pods (a Namespace here is Kubernetes' unit for separating workspaces, not a Linux namespace):

Check the Pod status:

[root@k8s-master ~]# kubectl get pod -n kube-system -o wide

As expected, the CoreDNS Pods, which depend on the Pod network, are stuck in Pending, i.e. they cannot be scheduled, because the node's network is not ready yet.

If cluster initialization runs into problems, clean up with kubeadm reset and then run the initialization again:

[root@k8s-master ~]# kubeadm reset
[root@k8s-master ~]# ifconfig cni0 down && ip link delete cni0
[root@k8s-master ~]# ifconfig flannel.1 down && ip link delete flannel.1
[root@k8s-master ~]# rm -rf /var/lib/cni/   $HOME/.kube/config
swapoff -a
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Step 6: Install the network plugin

Kubernetes supports several network plugins, such as flannel, calico, and canal; flannel is used here. (Run the following on the master node only.)

1. Download the flannel manifest

[root@op-k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-11-19 10:13:57--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5177 (5.1K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[=============================================================================================================>] 5,177       --.-K/s   in 0.003s

2021-11-19 10:13:58 (1.71 MB/s) - ‘kube-flannel.yml’ saved [5177/5177]

2. Deploy flannel from the manifest

[root@op-k8s-master ~]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

To remove flannel later:
kubectl delete -f kube-flannel.yml
On Alibaba Cloud ECS the nodes may have multiple network interfaces, so flanneld must be told which interface carries the cluster's internal traffic via the --iface argument in kube-flannel.yml; otherwise DNS resolution inside the cluster may fail. Edit kube-flannel.yml and add --iface=eth0 to the flanneld arguments:

        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0


Once the deployment finishes, re-check the Pod status with kubectl get; all of the system Pods should now have started successfully.

Checking again, the master node is now in the Ready state.

At this point the Kubernetes master node is fully deployed. If all you need is a single-node Kubernetes, it is usable now; note, however, that by default the master node does not run user Pods.

3. Once flannel is running, the cluster nodes become Ready:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   165m   v1.22.4
k8s-node01   Ready    <none>                 80m    v1.22.4
k8s-node02   Ready    <none>                 78m    v1.22.4

Step 7: Test the cluster

Deploy an Nginx application on the cluster to verify that everything is working.

1. Deploy Nginx

[root@op-k8s-master ~]# kubectl create deployment nginx --image=nginx:1.20-alpine
deployment.apps/nginx created

2. Expose the port

[root@op-k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

3. Check the Pod and Service status:

[root@op-k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65c4bffcb6-5d6hr   1/1     Running   0          93s
[root@op-k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        71m
nginx        NodePort    10.96.94.199   <none>        80:30678/TCP   56s
4. Test the Nginx service: open http://<NodeIP>:30678 in a browser (30678 is the NodePort assigned to the Service above).
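
For example, from any machine that can reach the nodes (30678 is the NodePort assigned above and will differ between deployments):

curl http://172.21.10.10:30678
# the Nginx welcome page in the response means the Service is reachable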

Step 8: Renew the certificates

Check when the cluster certificates expire:

kubeadm certs check-expiration
or
cd /etc/kubernetes/pki/
for i in $(ls *.crt); do echo "===== $i ====="; openssl x509 -in $i -text -noout | grep -A 3 'Validity' ; done

Renew all certificates and restart the kubelet (the control-plane static Pods, kube-apiserver, kube-controller-manager, kube-scheduler and etcd, also need to be restarted to pick up the renewed certificates):

kubeadm certs renew all
systemctl restart kubelet

Update the kubeconfig file:

[root@k8s-master ~]# cp -a /etc/kubernetes/admin.conf .kube/config 
cp: overwrite ‘.kube/config’? y

Step 9: Deploy the dashboard

1. Download the YAML manifest

Download it from the official site.

The latest version at the time of writing is v2.4.0.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

Edit the kubernetes-dashboard Service in the manifest so it is exposed as a NodePort on 30001:

vim recommended.yaml
----
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort      
  selector:
    k8s-app: kubernetes-dashboard
----


kubectl apply -f recommended.yaml

kubectl get pods -n kubernetes-dashboard
kubectl get pods,svc -n kubernetes-dashboard

Create a service account:

kubectl create serviceaccount dashboard-admin -n kube-system

Grant it cluster-admin permissions:

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Retrieve the login token:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in

Browse to https://<NodeIP>:30001 and sign in with the token retrieved above.
