1. Set up the lab environment for installing the k8s cluster

1.1 Machine plan

k8s cluster role | machine IP | hostname | installed components
control plane | 192.168.31.14 | master1 | apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico
worker | 192.168.31.15 | node1 | kubelet, kube-proxy, docker, calico, coredns
worker | 192.168.31.16 | node2 | kubelet, kube-proxy, docker, calico, coredns

1.2 Configure a static IP on each machine

Using master1 as the example (NIC names differ from machine to machine, so the config file name will differ as well; adapt it to your own NIC. The author runs VirtualBox VMs on a Mac.)

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.31.14
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
DNS1=192.168.31.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=50976f9d-af21-4c44-902e-d627b07e27bb
DEVICE=enp0s3
ONBOOT=yes

Note: explanation of the settings in /etc/sysconfig/network-scripts/ifcfg-enp0s3:

NAME=enp0s3   #NIC name; just keep it the same as DEVICE
DEVICE=enp0s3   #NIC device name
BOOTPROTO=static   #static means a static IP address
ONBOOT=yes    #bring the network up at boot; must be yes
IPADDR=192.168.31.14   #IP address; pick one inside the gateway's subnet
GATEWAY=192.168.31.1   #gateway; use the gateway address of the network your host is currently connected to

After editing the file, restart the network service for the configuration to take effect: service network restart
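
After the restart, a quick check like this confirms the new settings are active (a minimal sketch; enp0s3 is the NIC name used above):

ip addr show enp0s3     # should now show the static address 192.168.31.14
ip route | grep default # the default route should point at 192.168.31.1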

1.3 Configure passwordless SSH between the hosts

On every machine, add the following three lines to /etc/hosts:

192.168.31.14 master1  
192.168.31.15 node1  
192.168.31.16 node2

Run ssh-keygen on each machine to generate a key pair.

[root@master1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zmqLbcYpUtbWtaEmqdEUluTVWYDvPmLh69WcC9XOa9M root@master1
The key's randomart image is:
+---[RSA 2048]----+
|     ....o.+.    |
|     .+.. o      |
|     ... .       |
|      .   +  .   |
|     + oS+ o. .  |
|    + *o= o+ +   |
|   o = =ooo + o. |
|  . oo*.+.o. .o.E|
|   ..=++oo ..... |
+----[SHA256]-----+

Next, copy the public key on each of the three machines (this gathers all three nodes' keys on master1):

ssh-copy-id master1

Once copied, check on master1 that all three keys arrived (run on master1):

[root@master1 ~]# cat ~/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt6AVWshkjNgxziEdBFl9PsfMauFcCMrY+sMk0SUIALuS6SGcL1/KdW344rb7BOHjGcNG0HHaB1igt0CIkEyBhBA7zXMu2reEatAyB3RLrg5Y8EN17gfTzPbdQL0FODG1JvYnci7hGItUJKW+nCFUwNCTCPKclEYZD+L2ZomnRipOfXx+NBz3qLj11J5jcihmDziOVqnHI51nW/9aM6oIYxBBix7EADFd9g72t4R/7/T2EG9Ec/60BxjAVtTYii80KsYVnSfFra8yrMHSvr9wYXyOlC7VASCEbH4BNVRc6nOpYKaeWlpt/dHX4vg1c8uBvNfoG1Ph4zTrAQ162KLef root@master1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1EdEgecchcARP1MJ/Wh+jzRk5rUPJ4krowT5mpo9ncye4+IrUKaC2ZMnwnQs733muzlvFjOsJyCQBl/wDtB6vMtmYGP5DsdeM/k6vrdRgIX3nIq0URbsOYiCVBxqE5PFrJHp2YXo8Zsg7Rxg/8Fz4e7TO342+gnjp3vATEydfYKGeAiUjdtLyKti93D1NYqffbsj97Y3wchKwXYNdQ/rVID56zZOdmvJV8vAZNqebn0jR5ID0Ci147e+MxIQoOmmt3U+gVhNk4fjiuY4ljgF43xFJnuZHjqANVVglQwxZ8IcrNI5LnvhIS+3YV7xp7B3GC1MV3/YG/eG+Jng15lO3 root@node1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNxYNlP/ptrWoYprFFKVxE7J8Rxsb22rQ8HOtHxGhwwuBgmRh86RmTUE7rP85Y/xbnyvea7aThKevIn20vLlgmiYqFTiWZTRNUCU6MHPt2Eo5tfoGdNDx519TAD3R8mZZSHcE9uI8ZBCf2JsqWz4JFDG0recFKQMFvznMe6NVrZZUbXe7rPq5YHy/mabvDNrYSWRbpcna7nS2zD0ixG7QVjWeAIHbBtrSYMIT3hpgB9c5/8zZ2k+qa0HwkNkbvCuKS3dIPbfi+i6E5qktLJ2e7Lg+yKL2rueJ72UoUTfmAJ+m8pRJ7z96fIPhctH4Se2Zu5MMqxlgRgHSZ4h8d9lSf root@node2

Distribute the keys to the other nodes (run on master1):

[root@master1 .ssh]# scp -r ~/.ssh/authorized_keys node1:~/.ssh/
[root@master1 .ssh]# scp -r ~/.ssh/authorized_keys node2:~/.ssh/

Finally, verify passwordless login among the three machines:

ssh master1
ssh node1
ssh node2
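
If you want to verify all three hosts in one pass, a small loop like the following works (a sketch; it relies on the hostnames added to /etc/hosts above):

for h in master1 node1 node2; do
  ssh $h hostname   # should print each hostname without asking for a password
done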

1.4 Disable the swap partition

#Disable temporarily
[root@master1 ~]# swapoff -a
[root@node1 ~]# swapoff -a
[root@node2 ~]# swapoff -a

#Disable permanently: comment out the swap mount by putting a # at the start of the swap line
[root@master1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0
#If the VM is a clone, also delete the UUID
[root@node1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0
#If the VM is a clone, also delete the UUID
[root@node2 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0
#If the VM is a clone, also delete the UUID

Why disable swap?

Swap is the swap partition: when the machine runs short of memory it spills over to swap, but swap is much slower than RAM. For performance reasons Kubernetes does not allow swap to be used by default. kubeadm checks during initialization whether swap is off and fails if it is not. If you really want to keep swap enabled, you can pass --ignore-preflight-errors=Swap when installing k8s.
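
If you prefer not to edit /etc/fstab by hand, a one-liner like the following comments out the swap entry and verifies the result (a sketch; double-check /etc/fstab afterwards):

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out any line mentioning swap
free -m                               # the Swap row should now show 0 everywhere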

1.5 Adjust kernel parameters

[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@master1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@node1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

[root@node2 ~]# modprobe br_netfilter
[root@node2 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@node2 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
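
Appending modprobe to /etc/profile only runs at interactive logins. An arguably more robust alternative (optional, a sketch) is a modules-load.d entry, plus a quick verification that the sysctl values are active:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
lsmod | grep br_netfilter                 # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables # should print 1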

1.6 Stop the firewalld firewall

[root@master1 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@node1 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@node2 ~]# systemctl stop firewalld ; systemctl disable firewalld

1.7 Disable SELinux

[root@master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#After changing the SELinux config file, reboot the machine for the setting to take effect permanently
[root@node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#After changing the SELinux config file, reboot the machine for the setting to take effect permanently
[root@node2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#After changing the SELinux config file, reboot the machine for the setting to take effect permanently

[root@master1 ~]# getenforce
Disabled
#Disabled means SELinux is now off
[root@node1 ~]# getenforce
Disabled
#Disabled means SELinux is now off
[root@node2 ~]# getenforce
Disabled
#Disabled means SELinux is now off
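
To switch SELinux off immediately without a reboot (a sketch; the config file change above still provides persistence):

setenforce 0   # permissive mode for the current boot
getenforce     # prints Permissive; Disabled only appears after rebooting with SELINUX=disabled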

1.8 Configure Aliyun repos

Run on all three machines; master1 is used as the example.

1.8.1 Configure the base yum repo

#Back up the stock repo files
[root@master1 ~]# mkdir /root/repo.bak
[root@master1 ~]# cd /etc/yum.repos.d/
[root@master1 yum.repos.d]# mv * /root/repo.bak/

[root@master1 yum.repos.d]# vi CentOS-Base.repo 
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

1.8.2 Configure the EPEL repo

[root@master1 yum.repos.d]# vi epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

1.8.3 Configure the Aliyun repo needed for the k8s components

[root@master1 yum.repos.d]# vi kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
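
With all three repo files in place, rebuilding the yum cache is a quick sanity check that they resolve (run on each machine):

yum clean all
yum makecache fast
yum repolist   # base, updates, extras, epel, and kubernetes should all appear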

1.9 Configure time synchronization

Run on all three machines; master1 is used as the example.

#Install the ntpdate command
[root@master1 ~]# yum install ntpdate -y
#Sync with an Internet time source
[root@master1 ~]# ntpdate cn.pool.ntp.org
#Turn the time sync into a cron job
[root@master1 ~]# crontab -e
* */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
#Restart the crond service
[root@master1 ~]# service crond restart
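
Note that the crontab entry * */1 * * * actually fires every minute; if the intent is an hourly sync, an entry like the following is probably closer to what you want:

0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org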

1.10 Enable ipvs

Run on all three machines; master1 is used as the example.

[root@master1 ~]# cd /etc/sysconfig/modules/
[root@master1 modules]# vi ipvs.modules 
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
 /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
 if [ $? -eq 0 ]; then
 /sbin/modprobe ${kernel_module}
 fi
done
[root@master1]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

What is ipvs?

ipvs (IP Virtual Server) implements transport-layer load balancing, often called layer-4 LAN switching, as part of the Linux kernel. ipvs runs on a host and acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based requests to the real servers and makes their services appear as a virtual service on a single IP address.

ipvs vs. iptables

kube-proxy supports both iptables and ipvs modes. ipvs mode was introduced in Kubernetes v1.8, reached beta in v1.9, and became generally available in v1.11; iptables mode was added back in v1.1 and has been kube-proxy's default since v1.2. Both are built on netfilter, but ipvs uses hash tables, so once the number of Services grows large the hash-lookup advantage shows up and Service performance improves. So what are the differences between the two modes? (A kube-proxy config sketch for enabling ipvs mode follows the list below.)

  • ipvs offers better scalability and performance for large clusters
  • ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, etc.)
  • ipvs supports server health checks, connection retries, and similar features
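
Loading the modules only makes ipvs available; the kubeadm init command used later in this article still leaves kube-proxy in its default iptables mode. If you want kube-proxy started in ipvs mode, one option (a sketch for v1.20, not used in the rest of this article; the file name kubeadm-ipvs.yaml is arbitrary) is to pass a config file to kubeadm init instead of flags:

cat > kubeadm-ipvs.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
kubeadm init --config kubeadm-ipvs.yaml --ignore-preflight-errors=SystemVerification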

1.11 Install base packages

Run on all three machines:

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet --nogpgcheck

2. Install Docker

Run on all three machines; master1 is used as the example.

2.1 Install docker-ce

[root@master1 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io  -y
[root@master1 ~]# systemctl start docker && systemctl enable docker.service

2.2 Configure Docker registry mirrors and the cgroup driver

[root@master1 ~]# vim  /etc/docker/daemon.json 
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 

#This sets Docker's cgroup driver to systemd (the default is cgroupfs); kubelet defaults to systemd and the two must match.
[root@master1 ~]# systemctl daemon-reload  && systemctl restart docker
[root@master1 ~]# systemctl status docker
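
To confirm the cgroup driver really switched to systemd (so that it matches kubelet), a quick check:

docker info | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd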

3. Install the packages needed to initialize k8s

Run on all three machines; master1 is used as the example.

[root@master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@master1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@master1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 日 2024-03-10 11:16:26 CST; 9s ago
     Docs: https://kubernetes.io/docs/
  Process: 23548 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 23548 (code=exited, status=255)

3月 10 11:16:26 master1 kubelet[23548]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/uti...wait.go:90
3月 10 11:16:26 master1 kubelet[23548]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a86e78, 0x12a05f200)
3月 10 11:16:26 master1 kubelet[23548]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/uti...o:81 +0x4f
3月 10 11:16:26 master1 kubelet[23548]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
3月 10 11:16:26 master1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
3月 10 11:16:26 master1 kubelet[23548]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/...o:58 +0x8a
3月 10 11:16:26 master1 systemd[1]: Unit kubelet.service entered failed state.
3月 10 11:16:26 master1 systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

As shown above, kubelet is not in the running state yet. This is expected; it will become healthy once the k8s control-plane components come up.

Note: what each package does

kubeadm: a tool for bootstrapping the k8s cluster

kubelet: installed on every node of the cluster; responsible for starting Pods

kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components

4. Initialize the k8s cluster with kubeadm

Upload the offline image archive needed for cluster initialization to master1, node1, and node2, then load it manually:

[root@master1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@node1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@node2 ~]# docker load -i k8simage-1-20-6.tar.gz

Initialize the k8s cluster with kubeadm:

[root@master1 ~]# kubeadm init --kubernetes-version=1.20.6  --apiserver-advertise-address=192.168.31.14  --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

Note: --image-repository registry.aliyuncs.com/google_containers explicitly sets the image repository to registry.aliyuncs.com/google_containers.

kubeadm pulls images from k8s.gcr.io by default, but k8s.gcr.io is unreachable from many networks, so we tell it to pull from the registry.aliyuncs.com/google_containers repository instead.

Output like the following means the installation finished successfully:

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.14:6443 --token iyoy38.rskkm9shp2ek00n5 \
    --discovery-token-ca-cert-hash sha256:d855777f5e5fad28634bc967321115b624c9aedfe7115ec57adb65175bab403c

The command shown above:

kubeadm join 192.168.31.14:6443 --token iyoy38.rskkm9shp2ek00n5
    --discovery-token-ca-cert-hash sha256:d855777f5e5fad28634bc967321115b624c9aedfe7115ec57adb65175bab403c

is what joins worker nodes to the cluster, so save it.

If you forgot to save it, you can regenerate the join command on master1:

kubeadm token create --print-join-command

Configure kubectl's config file. This effectively authorizes kubectl: with this certificate, kubectl commands can manage the k8s cluster.

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE    VERSION
master1   NotReady   control-plane,master   9m2s   v1.20.6
The cluster node is still NotReady at this point because no network plugin has been installed.

5. Add the worker nodes

[root@node1 ~]# kubeadm join 192.168.31.14:6443 --token iyoy38.rskkm9shp2ek00n5 \
>     --discovery-token-ca-cert-hash sha256:d855777f5e5fad28634bc967321115b624c9aedfe7115ec57adb65175bab403c
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1160.el7.x86_64
DOCKER_VERSION: 20.10.6
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

# The preflight failure is the kernel "configs" module that cannot be loaded (plus a warning that this Docker version is not on the validated list); skip the system verification and the node joins the cluster successfully.
# Run on node1 and node2 respectively:
kubeadm join 192.168.31.14:6443 --token iyoy38.rskkm9shp2ek00n5 \
    --discovery-token-ca-cert-hash sha256:d855777f5e5fad28634bc967321115b624c9aedfe7115ec57adb65175bab403c \
    --ignore-preflight-errors=SystemVerification

Check the cluster nodes on master1:

[root@master1 yum.repos.d]# kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master1   NotReady   control-plane,master   33m   v1.20.6
node1     NotReady   <none>                 27s   v1.20.6
node2     NotReady   <none>                 5s    v1.20.6

Note: all nodes show NotReady because the network plugin has not been installed yet.
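
Optionally, you can label the worker nodes so kubectl get nodes shows a role instead of <none> (purely cosmetic; a sketch):

kubectl label node node1 node-role.kubernetes.io/worker=worker
kubectl label node node2 node-role.kubernetes.io/worker=worker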

6. Install the Kubernetes network plugin Calico

Note: the Calico manifest can be downloaded from https://docs.projectcalico.org/manifests/calico.yaml

Run on master1:

[root@master1 software]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Check that the Calico pods have started normally and that the nodes are now in the Ready state.

[root@master1 software]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-xk4xw   1/1     Running   0          84s   10.244.166.129   node1     <none>           <none>
calico-node-lcp6t                          1/1     Running   0          84s   192.168.31.15    node1     <none>           <none>
calico-node-vd4vj                          1/1     Running   0          84s   192.168.31.16    node2     <none>           <none>
calico-node-xslkg                          1/1     Running   0          84s   192.168.31.14    master1   <none>           <none>
coredns-7f89b7bc75-lmw55                   1/1     Running   0          47m   10.244.104.2     node2     <none>           <none>
coredns-7f89b7bc75-pt7sj                   1/1     Running   0          47m   10.244.104.1     node2     <none>           <none>
etcd-master1                               1/1     Running   0          47m   192.168.31.14    master1   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          47m   192.168.31.14    master1   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          47m   192.168.31.14    master1   <none>           <none>
kube-proxy-9c6cp                           1/1     Running   0          15m   192.168.31.15    node1     <none>           <none>
kube-proxy-fcsbs                           1/1     Running   0          14m   192.168.31.16    node2     <none>           <none>
kube-proxy-xzsgb                           1/1     Running   0          47m   192.168.31.14    master1   <none>           <none>
kube-scheduler-master1                     1/1     Running   0          47m   192.168.31.14    master1   <none>           <none>
[root@master1 software]# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   46m   v1.20.6
node1     Ready    <none>                 14m   v1.20.6
node2     Ready    <none>                 13m   v1.20.6

7. Test the k8s cluster

Upload the prepared tomcat.tar.gz to node1 and node2 and load the image with docker load.

[root@node1 software]# docker load -i tomcat.tar.gz 
f1b5933fe4b5: Loading layer [==================================================>]  5.796MB/5.796MB
9b9b7f3d56a0: Loading layer [==================================================>]  3.584kB/3.584kB
edd61588d126: Loading layer [==================================================>]  80.28MB/80.28MB
48988bb7b861: Loading layer [==================================================>]   2.56kB/2.56kB
8e0feedfd296: Loading layer [==================================================>]  24.06MB/24.06MB
aac21c2169ae: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: tomcat:8.5-jre8-alpine

[root@node2 software]# docker load -i tomcat.tar.gz 
f1b5933fe4b5: Loading layer [==================================================>]  5.796MB/5.796MB
9b9b7f3d56a0: Loading layer [==================================================>]  3.584kB/3.584kB
edd61588d126: Loading layer [==================================================>]  80.28MB/80.28MB
48988bb7b861: Loading layer [==================================================>]   2.56kB/2.56kB
8e0feedfd296: Loading layer [==================================================>]  24.06MB/24.06MB
aac21c2169ae: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: tomcat:8.5-jre8-alpine

Run the following on master1.

[root@master1 software]# vi tomcat.yaml 
apiVersion: v1  #the Pod belongs to the core v1 API group
kind: Pod  #the resource being created is a Pod
metadata:  #metadata
  name: demo-pod  #Pod name
  namespace: default  #namespace the Pod lives in
  labels:
    app: myapp  #label attached to the Pod
    env: dev      #label attached to the Pod
spec:
  containers:      #define the containers; this is a list, so multiple entries (each with its own name) are allowed
  - name:  tomcat-pod-java  #container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine   #image the container runs
    imagePullPolicy: IfNotPresent
    
[root@master1 software]# kubectl apply -f tomcat.yaml 
pod/demo-pod created
[root@master1 software]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   1/1     Running   0          10s

[root@master1 software]# vi tomcat-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev

[root@master1 software]# kubectl apply -f tomcat-service.yaml 
service/tomcat created
[root@master1 software]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          59m
tomcat       NodePort    10.98.55.166   <none>        8080:30080/TCP   18s
[root@master1 software]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
demo-pod   1/1     Running   0          63s   10.244.104.3   node2   <none>           <none>

Open http://192.168.31.16:30080 in a browser; the Tomcat welcome page should appear.
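If you prefer to check from the command line instead of a browser, curl against any node's NodePort should return the Tomcat page (a sketch; 192.168.31.16 is node2's IP from the plan above):

curl -I http://192.168.31.16:30080   # expect an HTTP/1.1 200 response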

Upload the prepared busybox-1-28.tar.gz to master1, load the image with docker load, and use it to test whether CoreDNS works.

[root@master1 software]# docker load -i busybox-1-28.tar.gz 
432b65032b94: Loading layer [==================================================>]   1.36MB/1.36MB
Loaded image: busybox:1.28

[root@master1 software]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

10.96.0.10 is the CoreDNS ClusterIP, which shows CoreDNS is configured correctly.

Internal Service names are resolved through CoreDNS.
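
As a further check, the tomcat Service created earlier should also resolve from inside the same busybox shell (a sketch):

nslookup tomcat.default.svc.cluster.local   # should return the tomcat Service ClusterIP (10.98.55.166 above)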

8. Install the k8s web UI (Dashboard)

8.1 Install the Dashboard

Upload the prepared dashboard_2_0_0.tar.gz and metrics-scrapter-1-0-1.tar.gz to node1 and node2 and load the images with docker load.

[root@node1 software]# docker load -i dashboard_2_0_0.tar.gz 
954115f32d73: Loading layer [==================================================>]  91.22MB/91.22MB
Loaded image: kubernetesui/dashboard:v2.0.0-beta8
[root@node1 software]# docker load -i metrics-scrapter-1-0-1.tar.gz 
89ac18ee460b: Loading layer [==================================================>]  238.6kB/238.6kB
878c5d3194b0: Loading layer [==================================================>]  39.87MB/39.87MB
1dc71700363a: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: kubernetesui/metrics-scraper:v1.0.1

[root@node2 software]# docker load -i dashboard_2_0_0.tar.gz 
954115f32d73: Loading layer [==================================================>]  91.22MB/91.22MB
Loaded image: kubernetesui/dashboard:v2.0.0-beta8
[root@node2 software]# docker load -i metrics-scrapter-1-0-1.tar.gz 
89ac18ee460b: Loading layer [==================================================>]  238.6kB/238.6kB
878c5d3194b0: Loading layer [==================================================>]  39.87MB/39.87MB
1dc71700363a: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: kubernetesui/metrics-scraper:v1.0.1

Run on master1:

[root@master1 software]# kubectl apply -f kubernetes-dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# Check the Dashboard pod status
[root@master1 software]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-67dc8   1/1     Running   0          3m35s
kubernetes-dashboard-54f5b6dc4b-sw5zp        1/1     Running   0          3m35s

# Check the Dashboard services
[root@master1 software]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.111.105.209   <none>        8000/TCP   5m27s
kubernetes-dashboard        ClusterIP   10.106.23.184    <none>        443/TCP    5m28s  

# Change the Service type to NodePort: edit type: ClusterIP to type: NodePort, then save and exit
[root@master1 software]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited

[root@master1 software]# kubectl get svc -n kubernetes-dashboard                      
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.111.105.209   <none>        8000/TCP        7m36s
kubernetes-dashboard        NodePort    10.106.23.184    <none>        443:32438/TCP   7m37s

Open https://192.168.31.16:32438 in a browser (any node IP in the cluster works).

8.2 Access the Dashboard with a token

Run on master1.

Create an admin binding for the Dashboard service account so its token can view every namespace and manage all resource objects:

[root@master1 software]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

List the secrets in the kubernetes-dashboard namespace:

[root@master1 software]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-z892f                kubernetes.io/service-account-token   3      17m
kubernetes-dashboard-certs         Opaque                                0      17m
kubernetes-dashboard-csrf          Opaque                                1      17m
kubernetes-dashboard-key-holder    Opaque                                2      17m
kubernetes-dashboard-token-vgq54   kubernetes.io/service-account-token   3      17m

Find the secret that carries the token, kubernetes-dashboard-token-vgq54:

[root@master1 software]# kubectl  describe  secret kubernetes-dashboard-token-vgq54 -n   kubernetes-dashboard
Name:         kubernetes-dashboard-token-vgq54
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 07b74c22-be74-4c03-8eb5-3e6234f526a2

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjVqZmFYVlkyOEpIeHFmLTd0OWp5SWhTSzZsZ3FrQ2tWRk1hLS1HV1FRdlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi12Z3E1NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3Yjc0YzIyLWJlNzQtNGMwMy04ZWI1LTNlNjIzNGY1MjZhMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.1au9LdxbLh_xxG5Szwzuo4aZgYvg0WbXC3mejI--VsrzZe_ir9EEADny9FpMLOZShDSaId9i6eBR1-nivBLBN_iURac0EIHhCvdgKLi0CNHqoZ5mmqYwxtfCoSNq0ceQQ19w09Vw2hRXaCE1-9h6ZreiI21CreZVmU_IyAlM-RqjhkzQx5g0Hv4FkXZIicWYQ2kFKcacP3DhHfx-VPdNpRaXtM6aAsRt5Ccz2j2yb1dSo_c1ylF6tu1-_7eHFgj2FEQ_z0rfCESbtGuEbpHpVKrG9dMX9nk9dtLigtcjNsmdLdWM2V-nn2hN952qyxDjubZiG0JOPWiYDC8NqYh_4g
ca.crt:     1066 bytes

Copy the value after token: into the Dashboard's token field in the browser to log in:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjVqZmFYVlkyOEpIeHFmLTd0OWp5SWhTSzZsZ3FrQ2tWRk1hLS1HV1FRdlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi12Z3E1NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3Yjc0YzIyLWJlNzQtNGMwMy04ZWI1LTNlNjIzNGY1MjZhMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.1au9LdxbLh_xxG5Szwzuo4aZgYvg0WbXC3mejI--VsrzZe_ir9EEADny9FpMLOZShDSaId9i6eBR1-nivBLBN_iURac0EIHhCvdgKLi0CNHqoZ5mmqYwxtfCoSNq0ceQQ19w09Vw2hRXaCE1-9h6ZreiI21CreZVmU_IyAlM-RqjhkzQx5g0Hv4FkXZIicWYQ2kFKcacP3DhHfx-VPdNpRaXtM6aAsRt5Ccz2j2yb1dSo_c1ylF6tu1-_7eHFgj2FEQ_z0rfCESbtGuEbpHpVKrG9dMX9nk9dtLigtcjNsmdLdWM2V-nn2hN952qyxDjubZiG0JOPWiYDC8NqYh_4g
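
If you prefer to pull the token out in one command instead of reading it from the describe output, something like this works (a sketch; it assumes the default kubernetes-dashboard-token-* secret naming):

kubectl -n kubernetes-dashboard get secret \
  | awk '/kubernetes-dashboard-token/{print $1}' \
  | xargs -I{} kubectl -n kubernetes-dashboard get secret {} -o jsonpath='{.data.token}' \
  | base64 -d; echo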

After clicking Sign in, you can view and manage resources in any namespace from the Dashboard.
