1. Environment Planning

1.1 Lab Environment Plan

Role            IP               Hostname      Installed components
Control plane   192.168.40.180   k8s-master1   apiserver, controller-manager, scheduler, etcd, docker, kubelet, kube-proxy, keepalived, nginx, calico
Control plane   192.168.40.181   k8s-master2   apiserver, controller-manager, scheduler, etcd, docker, kubelet, kube-proxy, keepalived, nginx, calico
Worker          192.168.40.182   k8s-node1     kubelet, kube-proxy, docker, calico, coredns
VIP             192.168.40.199

Lab environment:

  • OS: CentOS 7.6
  • Specs: 4 GiB RAM / 4 vCPU / 100 GB disk
  • Network: VMware NAT mode

Kubernetes network plan:

  • Kubernetes version: v1.20.6
  • Pod CIDR: 10.244.0.0/16
  • Service CIDR: 10.10.0.0/16

1.2 Node Initialization

1) Configure a static IP address

# Give each VM or physical machine a static IP so the address survives reboots. Example: setting the static IP on master1
~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.40.180	# change per the plan above
NETMASK=255.255.255.0
GATEWAY=192.168.40.2
DNS1=223.5.5.5

# Restart networking
~]# systemctl restart network

# Test network connectivity
~]# ping baidu.com
PING baidu.com (39.156.69.79) 56(84) bytes of data.
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=63.2 ms
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=47.3 ms

2) Set the hostname

~]# hostnamectl set-hostname <hostname> && bash
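
For this lab's plan that means:

hostnamectl set-hostname k8s-master1 && bash    # on 192.168.40.180
hostnamectl set-hostname k8s-master2 && bash    # on 192.168.40.181
hostnamectl set-hostname k8s-node1 && bash      # on 192.168.40.182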

3) Configure the hosts file

# On all nodes
cat >> /etc/hosts << EOF 
192.168.40.180 k8s-master1
192.168.40.181 k8s-master2
192.168.40.182 k8s-node1 
EOF

# Test name resolution
~]# ping k8s-master1
PING k8s-master1 (192.168.40.180) 56(84) bytes of data.
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=2 ttl=64 time=0.047 ms

4) Configure passwordless SSH between the hosts

# Generate an SSH key pair; press Enter through every prompt and set no passphrase
ssh-keygen -t rsa

# Install the local public key into the matching account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-master2
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1

5) Disable firewalld

systemctl stop firewalld && systemctl disable firewalld

6) Disable SELinux

# Disable for the current session
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Verify
getenforce

7) Disable the swap partition

# Disable for the current session
swapoff -a
# Disable permanently: comment out the swap mount line in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Note: on a cloned VM, also delete the UUID line

Q: Why disable the swap partition?

Swap is spill-over space: when the machine runs short of memory it swaps to disk, but swap performance is poor, so Kubernetes disallows swap by default for the sake of performance. kubeadm's preflight checks verify that swap is off and fail initialization otherwise. If you really want to keep swap enabled, you can pass --ignore-preflight-errors=Swap when installing.
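
If you do keep swap enabled, kubelet has to tolerate it too, not just kubeadm. A minimal sketch (not used in this lab), assuming the CentOS RPM layout where kubelet's extra arguments live in /etc/sysconfig/kubelet:

# Skip the swap preflight check during init
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap
# Tell kubelet not to fail when swap is on
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
systemctl restart kubelet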

8) Tune kernel parameters

# 1. Load the br_netfilter module
modprobe br_netfilter

# 2. Verify the module loaded
lsmod |grep br_netfilter

# 3. Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# 4. Apply the new parameters
sysctl -p /etc/sysctl.d/k8s.conf
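
# Caveat (an extra step, standard systemd practice): modprobe does not persist across
# reboots, so make br_netfilter load at boot or the bridge sysctls above stop applying
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF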

9) Configure the Aliyun yum repo

# Back up the original repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Download the new CentOS-Base.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Rebuild the yum cache
yum clean all && yum makecache

10) Configure time synchronization

# Install ntpdate
yum install ntpdate -y

# Sync against a public NTP pool
ntpdate cn.pool.ntp.org

# Make the sync an hourly cron job (the minute field must be fixed, or the job fires every minute)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

# Restart crond
service crond restart

11) Install iptables

# Install the iptables services package
yum install iptables-services -y

# Stop and disable the iptables service
service iptables stop && systemctl disable iptables

# Flush existing firewall rules
iptables -F

12) Enable IPVS

Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient at scale, so the official docs recommend enabling IPVS.

# Create the ipvs.modules file
~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  # only load modules that actually exist in this kernel
  if /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1; then
    /sbin/modprobe ${kernel_module}
  fi
done

# Run the script and confirm the modules are loaded
~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141092  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133387  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

Notes:

# What is IPVS?
	IPVS (IP Virtual Server) implements transport-layer load balancing (the layer-4 LAN switching we usually talk about) as part of the Linux kernel. Running on a host in front of a cluster of real servers, IPVS forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a virtual service on a single IP address.

# IPVS vs. iptables
	kube-proxy supports both iptables and ipvs modes. The ipvs mode was introduced in Kubernetes v1.8, reached beta in v1.9, and went GA in v1.11; iptables support was added back in v1.1 and has been kube-proxy's default since v1.2. Both are built on netfilter, but ipvs uses hash tables, so once the number of Services grows large, hash lookups give it a clear speed advantage and better Service performance. The main differences:
1. ipvs offers better scalability and performance for large clusters
2. ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, and so on)
3. ipvs supports server health checks, connection retries, and similar features
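
Once kube-proxy is running in ipvs mode (the KubeProxyConfiguration in section 3 sets this), it can be verified with ipvsadm, which step 13 below installs. Illustrative commands:

# List the virtual servers kube-proxy has programmed into IPVS
ipvsadm -Ln
# Or grep the kube-proxy logs for the proxier in use
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i proxier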

13) Install base packages

~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet rsync

14) Install docker-ce

~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# yum install docker-ce docker-ce-cli containerd.io -y
~]# systemctl start docker && systemctl enable docker.service && systemctl status docker

15) Configure Docker registry mirrors

# Note: also switch Docker's cgroup driver to systemd (the default is cgroupfs); it must match the cgroup driver kubelet uses, or kubelet will not work properly
~]# tee /etc/docker/daemon.json << 'EOF'
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF

~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
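
# Confirm the driver change took effect; should print "Cgroup Driver: systemd"
~]# docker info | grep -i "cgroup driver"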

2. Deploying nginx and keepalived

1) Install nginx and keepalived

# Set up nginx as an active/backup pair on k8s-master1 and k8s-master2
[root@k8s-master1 ~]#  yum install nginx keepalived -y
[root@k8s-master2 ~]#  yum install nginx keepalived -y

# Note: the stream module must be installed (on both masters), or nginx fails with: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:13
[root@k8s-master1 ~]# yum install nginx-mod-stream -y
[root@k8s-master1 ~]# nginx -v
nginx version: nginx/1.20.1

2) Edit the nginx configuration (identical on primary and backup)

[root@k8s-master1 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two masters' apiserver instances
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.40.180:6443;   # k8s-master1 APISERVER IP:PORT
       server 192.168.40.181:6443;   # k8s-master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443; # nginx shares the host with the apiserver, so it must not listen on 6443 or the ports would clash
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}

# The /etc/nginx/nginx.conf on k8s-master2 is identical to the file above on k8s-master1.
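
# After editing the file on each master, validate the syntax before starting the service
~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful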

3) keepalived configuration

# Configuration on the primary (k8s-master1)
[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf 
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
    priority 100    # priority; set this to 90 on the backup
    advert_int 1    # VRRP heartbeat advertisement interval, 1 second by default
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.40.199/24
    } 
    track_script {
        check_nginx
    } 
}

# Health-check script on the primary
[root@k8s-master1 ~]# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@k8s-master1 ~]# chmod +x  /etc/keepalived/check_nginx.sh


# Configuration on the backup (k8s-master2)
[root@k8s-master2 ~]# cat /etc/keepalived/keepalived.conf 
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface eth0
    virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
    priority 90
    advert_int 1
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.40.199/24
    } 
    track_script {
        check_nginx
    } 
}


[root@k8s-master2 ~]# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@k8s-master2 ~]# chmod +x /etc/keepalived/check_nginx.sh
# Note: keepalived decides failover from the check script's exit code (0 = healthy, non-zero = failed). The script above takes a blunter route: when nginx is not running it stops keepalived itself, which releases the VIP to the backup.
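
An exit-code-based variant (a sketch, not the script used above) would return non-zero instead of stopping keepalived, paired with a "weight -20" line in the vrrp_script block so keepalived simply lowers this node's priority:

#!/bin/bash
# exit non-zero when nothing is listening on the local proxy port
ss -lnt | grep -q ':16443 ' || exit 1
exit 0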

4) Start the services

[root@k8s-master1 ~]# systemctl daemon-reload && systemctl start nginx keepalived && systemctl enable nginx keepalived
[root@k8s-master2 ~]# systemctl daemon-reload && systemctl start nginx keepalived && systemctl enable nginx keepalived

5) Verify the VIP is bound

[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.40.199/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
       
       
[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef1:8161/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

6) Test keepalived failover

# Stop nginx on k8s-master1; the VIP should fail over to k8s-master2
[root@k8s-master1 ~]# systemctl stop nginx
[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
       
[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.40.199/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef1:8161/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
       
# Restart nginx and keepalived on master1; the VIP floats back
[root@k8s-master1 ~]# systemctl start nginx
[root@k8s-master1 ~]# systemctl start keepalived
[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.40.199/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

7) Check the listening ports

[root@k8s-master1 ~]# netstat -lntp|grep 16443
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      22461/nginx: master
[root@k8s-master2 ~]# netstat -lntp|grep 16443
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      22461/nginx: master

3. Deploying the Cluster with kubeadm

3.1 Configure the Kubernetes yum repo

[root@k8s-master1 ~]# vim  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

# Copy the Kubernetes repo file from k8s-master1 to k8s-master2 and k8s-node1
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-master2:/etc/yum.repos.d/
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/

3.2 Install the packages

# Note: kubelet will not show as running after this; that is normal and can be ignored, as it becomes healthy once the k8s control-plane components are up
[root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-master1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-master1 ~]# systemctl status kubelet

[root@k8s-master2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-master2 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-master2 ~]# systemctl status kubelet

[root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet

3.3 Initialize the cluster with kubeadm

1) Create kubeadm-config.yaml

[root@k8s-master1 ~]# vim kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.40.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
 certSANs:
 - 192.168.40.180
 - 192.168.40.181
 - 192.168.40.182
 - 192.168.40.199
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs

About imageRepository: registry.aliyuncs.com/google_containers: kubeadm pulls its images from k8s.gcr.io by default, but k8s.gcr.io is unreachable here, so the Aliyun mirror registry.aliyuncs.com/google_containers is specified instead.
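
The control-plane images can also be pre-pulled so the initialization itself is faster (the init output below suggests the same command):

[root@k8s-master1 ~]# kubeadm config images pull --config kubeadm-config.yaml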

2) Initialize the cluster with kubeadm

[root@k8s-master1 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.40.180 192.168.40.199 192.168.40.181 192.168.40.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.40.180 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.40.180 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 113.537013 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: d8j1ts.o62xh6zi98031f5l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.40.199:16443 --token d8j1ts.o62xh6zi98031f5l \
    --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.199:16443 --token d8j1ts.o62xh6zi98031f5l \
    --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b

3) Configure the kubectl config file

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   2m27s   v1.20.6
# The node is still NotReady because no network plugin has been installed yet.

3.4 Scaling the cluster: add a master node

1) Create the certificate directories

[root@k8s-master2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

2) Copy the certificates

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/ca.crt k8s-master2:/etc/kubernetes/pki/
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/ca.key k8s-master2:/etc/kubernetes/pki/

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/sa.key k8s-master2:/etc/kubernetes/pki/
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/sa.pub k8s-master2:/etc/kubernetes/pki/

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-master2:/etc/kubernetes/pki/
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key k8s-master2:/etc/kubernetes/pki/

[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt k8s-master2:/etc/kubernetes/pki/etcd/
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key k8s-master2:/etc/kubernetes/pki/etcd/
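
# The same copies expressed as one loop (an equivalent sketch of the commands above):
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp /etc/kubernetes/pki/$f k8s-master2:/etc/kubernetes/pki/
done
scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key k8s-master2:/etc/kubernetes/pki/etcd/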

3) Join the new control-plane node to the cluster

# Generate a join command; note that a control-plane node must append --control-plane
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.199:16443 --token qh0gw4.2brd8ioh2hyscd1a     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b

# Join the new control-plane node
[root@k8s-master2 ~]# kubeadm join 192.168.40.199:16443 --token qh0gw4.2brd8ioh2hyscd1a     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b --control-plane
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.40.181 192.168.40.199 192.168.40.180 192.168.40.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.40.181 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.40.181 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

4) Create the kubeconfig

[root@k8s-master2 ~]# mkdir -p $HOME/.kube
[root@k8s-master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

5) Check cluster status

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   24m     v1.20.6
k8s-master2   NotReady   control-plane,master   3m41s   v1.20.6

3.5 Scaling the cluster: add a worker node

1) Join the cluster

# Print the join command
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.199:16443 --token ay4uyg.1x09kgx6ihjii29c     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b

# Join the new worker node to the cluster
[root@k8s-node1 ~]# kubeadm join 192.168.40.199:16443 --token ay4uyg.1x09kgx6ihjii29c     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2) Check status and label the node

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master1   NotReady   control-plane,master   36m   v1.20.6
k8s-master2   NotReady   control-plane,master   16m   v1.20.6
k8s-node1     NotReady   <none>                 93s   v1.20.6
[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   36m    v1.20.6
k8s-master2   NotReady   control-plane,master   16m    v1.20.6
k8s-node1     NotReady   worker                 107s   v1.20.6

# All nodes are NotReady because the network plugin has not been installed

3) Check pod status

[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-5d8vn              0/1     Pending   0          37m	# Pending until the network plugin is installed
coredns-7f89b7bc75-xvkth              0/1     Pending   0          37m
etcd-k8s-master1                      1/1     Running   0          37m
etcd-k8s-master2                      1/1     Running   0          17m
kube-apiserver-k8s-master1            1/1     Running   1          37m
kube-apiserver-k8s-master2            1/1     Running   0          17m
kube-controller-manager-k8s-master1   1/1     Running   1          37m
kube-controller-manager-k8s-master2   1/1     Running   0          17m
kube-proxy-4r7kf                      1/1     Running   0          37m
kube-proxy-6mwh6                      1/1     Running   0          17m
kube-proxy-qsbp5                      1/1     Running   0          2m44s
kube-scheduler-k8s-master1            1/1     Running   1          37m
kube-scheduler-k8s-master2            1/1     Running   0          17m

3.6 Install Calico

Manifest download: https://docs.projectcalico.org/manifests/calico.yaml

[root@k8s-master1 ~]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pod -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-lb52j   1/1     Running   0          3m58s
calico-node-9rqdd                          1/1     Running   0          3m58s
calico-node-xdr5t                          1/1     Running   0          3m58s
calico-node-xvkv5                          1/1     Running   0          3m58s
coredns-7f89b7bc75-5d8vn                   1/1     Running   0          49m
coredns-7f89b7bc75-xvkth                   1/1     Running   0          49m
etcd-k8s-master1                           1/1     Running   0          49m
etcd-k8s-master2                           1/1     Running   0          29m
kube-apiserver-k8s-master1                 1/1     Running   1          49m
kube-apiserver-k8s-master2                 1/1     Running   0          29m
kube-controller-manager-k8s-master1        1/1     Running   1          49m
kube-controller-manager-k8s-master2        1/1     Running   0          29m
kube-proxy-4r7kf                           1/1     Running   0          49m
kube-proxy-6mwh6                           1/1     Running   0          29m
kube-proxy-qsbp5                           1/1     Running   0          14m
kube-scheduler-k8s-master1                 1/1     Running   1          49m
kube-scheduler-k8s-master2                 1/1     Running   0          29m
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   50m   v1.20.6
k8s-master2   Ready    control-plane,master   29m   v1.20.6
k8s-node1     Ready    worker                 15m   v1.20.6

Test network connectivity:

[root@k8s-master1 ~]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-lb52j   1/1     Running   0          5m49s   10.244.36.65     k8s-node1     <none>           <none>
calico-node-9rqdd                          1/1     Running   0          5m49s   192.168.40.182   k8s-node1     <none>           <none>
calico-node-xdr5t                          1/1     Running   0          5m49s   192.168.40.180   k8s-master1   <none>           <none>
calico-node-xvkv5                          1/1     Running   0          5m49s   192.168.40.181   k8s-master2   <none>           <none>
coredns-7f89b7bc75-5d8vn                   1/1     Running   0          51m     10.244.36.67     k8s-node1     <none>           <none>
coredns-7f89b7bc75-xvkth                   1/1     Running   0          51m     10.244.36.66     k8s-node1     <none>           <none>
etcd-k8s-master1                           1/1     Running   0          51m     192.168.40.180   k8s-master1   <none>           <none>
etcd-k8s-master2                           1/1     Running   0          30m     192.168.40.181   k8s-master2   <none>           <none>
kube-apiserver-k8s-master1                 1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
kube-apiserver-k8s-master2                 1/1     Running   0          30m     192.168.40.181   k8s-master2   <none>           <none>
kube-controller-manager-k8s-master1        1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master2        1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>
kube-proxy-4r7kf                           1/1     Running   0          51m     192.168.40.180   k8s-master1   <none>           <none>
kube-proxy-6mwh6                           1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>
kube-proxy-qsbp5                           1/1     Running   0          16m     192.168.40.182   k8s-node1     <none>           <none>
kube-scheduler-k8s-master1                 1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
kube-scheduler-k8s-master2                 1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>

# Note: use the busybox:1.28 image
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping 10.244.36.67
PING 10.244.36.67 (10.244.36.67): 56 data bytes
64 bytes from 10.244.36.67: seq=0 ttl=63 time=0.113 ms
64 bytes from 10.244.36.67: seq=1 ttl=63 time=0.203 ms
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=127 time=47.840 ms
64 bytes from 39.156.69.79: seq=1 ttl=127 time=62.833 ms

3.7 Deploy a test Tomcat service

[root@k8s-master1 ~]# cat tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  containers:
  - name:  tomcat-pod-java
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@k8s-master1 ~]# cat tomcat-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev
    
[root@k8s-master1 ~]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@k8s-master1 ~]# kubectl apply -f tomcat-service.yaml
service/tomcat created

[root@k8s-master1 ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   2/2     Running   0          116s
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.10.0.1       <none>        443/TCP          59m
tomcat       NodePort    10.10.235.180   <none>        8080:30080/TCP   114s
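
The NodePort can also be checked from the shell before opening a browser (any node IP works; illustrative command):

[root@k8s-master1 ~]# curl -I http://192.168.40.182:30080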

Browser test: open http://192.168.40.182:30080 (any node IP works); you should reach the Tomcat service.

3.8 Test the CoreDNS service

# Note: use busybox 1.28 specifically, not the latest tag; in newer images nslookup fails to resolve the DNS name and IP
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.10.0.10
Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local

/ # nslookup tomcat.default.svc.cluster.local
Server:    10.10.0.10
Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.10.235.180 tomcat.default.svc.cluster.local

 
4. Summary

        1. The cluster keeps serving business traffic as long as at least one master node is running.

        2. Running kubectl commands on a master requires at least two healthy masters; with fewer, requests fail with errors such as "Unable to connect to the server: net/http: TLS handshake timeout", because the stacked etcd cluster loses its quorum and the apiserver can no longer serve requests.

        3. With three masters, when the master currently holding the VIP goes down, one of the remaining two picks up the VIP and the nodes can still be queried. Once two or more masters are down, etcd loses quorum: the workloads keep running, but the nodes can no longer be queried and the control plane has to be rebuilt. In this experiment, even after the failed hosts came back the control plane did not recover, and the cluster had to be rebuilt.

        4. Automatic pod failover on node failure: when the node hosting a pod goes down, controller-manager's --pod-eviction-timeout (5 minutes by default) applies; after it expires, Kubernetes marks the pods Unknown and starts replacements on other nodes. When the failed node recovers, Kubernetes deletes the Unknown pods left on it. To force the migration immediately, use kubectl drain <nodename> (see the sketch after this list).

        5. For high availability, deploy at least three master nodes and three worker nodes, and keep the number of masters odd (3, 5, 7, 9).
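
A sketch of the forced migration mentioned in item 4:

# Evict all pods from the node right away (DaemonSet pods are left in place)
kubectl drain k8s-node1 --ignore-daemonsets --force
# Allow the node to receive pods again once it is healthy
kubectl uncordon k8s-node1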

5. Multi-master KubeSphere Installation

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""        # keep the default; a default StorageClass already exists
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true       # set to true to enable etcd monitoring
    endpointIps: '17.2.1.220,17.2.1.221,17.2.1.222'  # change to your own master node IPs
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true         # set to true to enable Redis
    openldap:
      enabled: true         # set to true to enable OpenLDAP
    minioVolumeSize: 20Gi # Minio PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # set to true to enable alerting
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # set to true to enable auditing
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # set to true to enable DevOps
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # set to true to enable cluster events
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true        # set to true to enable logging
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # keep false: metrics-server is already installed above; enabling it here pulls the upstream image, which fails to download
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # set to true to enable network policies
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # set to "calico" when the CNI plugin is Calico; "none" keeps IP pool management disabled
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # set to true to enable the App Store
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # set to true to enable service mesh governance
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false   # keep false: KubeEdge targets edge devices, which this lab does not have
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
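
To apply it, a sketch assuming the ClusterConfiguration above is saved as cluster-configuration.yaml next to the standard v3.3.0 kubesphere-installer.yaml manifest:

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
# Follow the installer's progress
kubectl logs -n kubesphere-system deploy/ks-installer -f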
