1 Installing a k8s 1.25 High-Availability Cluster with kubeadm

k8s environment plan:

podSubnet (pod network): 10.244.0.0/16

serviceSubnet (service network): 10.96.0.0/12

Lab environment plan:

OS: CentOS 7.9

Specs: 4 GiB RAM / 2 vCPU / 60 GB disk (control node)
       2 GiB RAM / 1 vCPU / 60 GB disk (worker node)

Network: NAT mode

K8s cluster roles:

Role          IP               Hostname  Installed components
Control node  192.168.109.131  master1   apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
Worker node   192.168.109.132  node1     kube-proxy, calico, coredns, container runtime, kubelet

1.1 Initialize the lab environment for the k8s cluster

1.1.1 Change each machine's IP to a static IP

[root@master1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
[root@master1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPADDR=192.168.109.131
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=192.168.109.2
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="5794589b-def4-4c51-a6fb-34f99e4963cb"
DEVICE="ens32"
ONBOOT="yes"
[root@node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32 
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPADDR=192.168.109.132
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=192.168.109.2
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="4d209e18-b8de-4032-b486-bbabccfc9986"
DEVICE="ens32"
ONBOOT="yes"

1.1.2 Disable SELinux on all k8s machines (Ansible can be used)

Run selinux.yaml via ansible-playbook to disable SELinux on all hosts in one pass. Note that the config-file change only takes effect after a reboot; running setenforce 0 disables enforcement for the current boot.

[root@master1 ansible]# cat selinux.yaml 
---
- name: Ensure SELinux is disabled
  hosts: all
  tasks:
    - name: line selinux_config
      lineinfile:
        path: /etc/selinux/config
        regexp: '^SELINUX='
        line: SELINUX=disabled
[root@master1 ansible]# ansible all -a "getenforce"
192.168.109.131 | CHANGED | rc=0 >>
Disabled
192.168.109.132 | CHANGED | rc=0 >>
Disabled
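The lineinfile task above amounts to a simple sed edit. A minimal sketch, demonstrated on a scratch copy so it is safe to try; on a real node the target would be /etc/selinux/config:

```shell
# Rewrite the SELINUX= line to disabled, leaving other settings untouched.
# /tmp/selinux-config.demo stands in for /etc/selinux/config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config.demo
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config.demo
grep '^SELINUX=' /tmp/selinux-config.demo
```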

1.1.3 Configure hostnames; append the following entries to the end of /etc/hosts on every machine:

[root@master1 ansible]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.109.131 master1
192.168.109.132 node1
[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.109.131 master1
192.168.109.132 node1
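The edit above can be made idempotent so that re-running it never duplicates entries. A sketch, demonstrated on a scratch copy; on a real node HOSTS_FILE would be /etc/hosts:

```shell
# Append each IP/hostname mapping only if it is not already present.
HOSTS_FILE=/tmp/hosts.demo            # stands in for /etc/hosts
printf '127.0.0.1   localhost\n' > "$HOSTS_FILE"
for entry in "192.168.109.131 master1" "192.168.109.132 node1"; do
  grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```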

1.1.4 Configure passwordless SSH between hosts (already done earlier when Ansible was set up)

1.1.5 Disable the firewalld firewall

Write a firewalld playbook to stop and disable the firewall service on all hosts:

[root@master1 ansible]# cat firewalld.yaml 
---
- name: Stop and disable firewall service
  hosts: all
  tasks:
    - name: stop service firewalld, if it is running
      service:
        name: firewalld
        state: stopped

    - name: disable service firewalld at boot
      service:
        name: firewalld
        enabled: no
[root@master1 ansible]# ansible all -a "systemctl status firewalld"
192.168.109.132 | FAILED | rc=3 >>
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)non-zero return code
192.168.109.131 | FAILED | rc=3 >>
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)non-zero return code

1.1.6 Disable the swap partition to improve performance

Temporary (until reboot):

[root@master1 ansible]# ansible all -a "swapoff -a"
192.168.109.132 | CHANGED | rc=0 >>

192.168.109.131 | CHANGED | rc=0 >>

Permanent: comment out the swap mount by adding a # at the start of the swap line:

vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Mar  8 16:53:28 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=faa3b0f1-fca1-457a-9a63-929a90dfc2ed /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
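The manual vim edit above can also be done non-interactively. A sketch, demonstrated on a scratch copy; on a real node the target is /etc/fstab (run swapoff -a first, as above):

```shell
# Prefix every uncommented swap entry with '#'.
# /tmp/fstab.demo stands in for /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```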

1.1.7 Adjust machine kernel parameters

Set kernel parameters on master1:

[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Set kernel parameters on node1:

[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

1.1.8 Configure the Aliyun repo

Configure the repo on all hosts via Ansible:

[root@master1 ~]# ansible all -a "yum install yum-utils -y"
[WARNING]: Consider using the yum module rather than running 'yum'.  If you
need to use command because yum is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.
192.168.109.132 | CHANGED | rc=0 >>
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
Nothing to do
192.168.109.131 | CHANGED | rc=0 >>
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * epel: mirrors.qlu.edu.cn
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.nju.edu.cn
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
Nothing to do
[root@master1 ~]# ansible all -a "yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
192.168.109.131 | CHANGED | rc=0 >>
Loaded plugins: fastestmirror, langpacks
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
192.168.109.132 | CHANGED | rc=0 >>
Loaded plugins: fastestmirror, langpacks
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@master1 ~]# ll /etc/yum.repos.d/
total 52
-rw-r--r--. 1 root root 1664 Oct 23  2020 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Oct 23  2020 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Oct 23  2020 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Oct 23  2020 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Oct 23  2020 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Oct 23  2020 CentOS-Sources.repo
-rw-r--r--. 1 root root 8515 Oct 23  2020 CentOS-Vault.repo
-rw-r--r--. 1 root root  616 Oct 23  2020 CentOS-x86_64-kernel.repo
-rw-r--r--  1 root root 2081 Mar 11 14:51 docker-ce.repo
-rw-r--r--. 1 root root  951 Oct  3  2017 epel.repo
-rw-r--r--. 1 root root 1050 Oct  3  2017 epel-testing.repo
[root@node1 ~]# ll /etc/yum.repos.d/
total 44
-rw-r--r--. 1 root root 1664 Oct 23  2020 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Oct 23  2020 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Oct 23  2020 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Oct 23  2020 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Oct 23  2020 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Oct 23  2020 CentOS-Sources.repo
-rw-r--r--. 1 root root 8515 Oct 23  2020 CentOS-Vault.repo
-rw-r--r--. 1 root root  616 Oct 23  2020 CentOS-x86_64-kernel.repo
-rw-r--r--  1 root root 2081 Mar 11 14:51 docker-ce.repo

1.1.9 Configure the Aliyun repo needed for the k8s components

Configure the k8s repo on master1:

[root@master1 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF 
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=0
> EOF

Configure the k8s repo on node1:

[root@node1 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=0
> EOF

1.1.10 Configure time synchronization

Configure time sync on each host via Ansible:

[root@master1 ansible]# cat ntp.yaml 
---
- name: install ntpd and set cron
  hosts: all
  tasks:
    - name: yum install ntp
      yum:
        name: ntp
        state: present

    - name: set cron ntp sync
      cron:
        name: ntp sync
        minute: "1"
        job: "/usr/sbin/ntpdate   cn.pool.ntp.org"

[root@master1 ansible]# ansible-playbook ntp.yaml 

PLAY [install ntpd and set cron] ************************************************************

TASK [Gathering Facts] **********************************************************************
ok: [192.168.109.131]
ok: [192.168.109.132]

TASK [yum install ntp] **********************************************************************
changed: [192.168.109.132]
changed: [192.168.109.131]

TASK [set cron ntp sync] ********************************************************************
changed: [192.168.109.132]
changed: [192.168.109.131]

PLAY RECAP **********************************************************************************
192.168.109.131            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.109.132            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[root@master1 ansible]# crontab -l
#Ansible: ntp sync
1 * * * * /usr/sbin/ntpdate   cn.pool.ntp.org

[root@node1 ~]# crontab -l
#Ansible: ntp sync
1 * * * * /usr/sbin/ntpdate   cn.pool.ntp.org

1.1.11 Install base packages

[root@master1 ~]# yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet
[root@node1 ~]# yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

1.1.12 Install the containerd service

1.1.12.1 Install containerd
[root@master1 ~]# yum install -y containerd.io-1.6.6
[root@node1 ~]# yum install -y containerd.io-1.6.6
1.1.12.2 Edit the containerd config file and start containerd

master1:

[root@master1 ~]# mkdir -p /etc/containerd
[root@master1 ~]# containerd config default > /etc/containerd/config.toml
[root@master1 ~]# vim /etc/containerd/config.toml
[root@master1 ~]# grep SystemdCgroup /etc/containerd/config.toml              
            SystemdCgroup = true
[root@master1 ~]# grep sandbox_image /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
[root@master1 ~]# systemctl enable containerd  --now
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@master1 ~]# ps -ef|grep containerd
root       5149      1  1 17:14 ?        00:00:00 /usr/bin/containerd
root       5164   1761  0 17:14 pts/0    00:00:00 grep --color=auto containerd

node1:

[root@node1 ~]# mkdir -p /etc/containerd
[root@node1 ~]# containerd config default > /etc/containerd/config.toml
[root@node1 ~]# vim /etc/containerd/config.toml
[root@node1 ~]# systemctl enable containerd  --now
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
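The two vim edits above (setting SystemdCgroup = true and pointing sandbox_image at the Aliyun pause image) can be done non-interactively with sed. A sketch, demonstrated on a scratch copy; on a real node the target is /etc/containerd/config.toml, followed by systemctl restart containerd:

```shell
# /tmp/containerd-config.demo stands in for /etc/containerd/config.toml;
# the two lines mirror the defaults that need changing.
cat > /tmp/containerd-config.demo <<'EOF'
            SystemdCgroup = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.demo
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /tmp/containerd-config.demo
cat /tmp/containerd-config.demo
```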

1.1.13 Create the /etc/crictl.yaml file

master1:
Write /etc/crictl.yaml and restart containerd:

[root@master1 ~]# cat > /etc/crictl.yaml <<EOF
> runtime-endpoint: unix:///run/containerd/containerd.sock
> image-endpoint: unix:///run/containerd/containerd.sock
> timeout: 10
> debug: false
> EOF
[root@master1 ~]# systemctl restart containerd 

node1:
Write /etc/crictl.yaml and restart containerd:

[root@node1 ~]# cat > /etc/crictl.yaml <<EOF
> runtime-endpoint: unix:///run/containerd/containerd.sock
> image-endpoint: unix:///run/containerd/containerd.sock
> timeout: 10
> debug: false
> EOF
[root@node1 ~]# systemctl restart containerd

1.1.14 Install docker-ce via Ansible

[root@master1 ~]# ansible all -a "yum -y install docker-ce"
[WARNING]: Consider using the yum module rather than running 'yum'.  If you need to use
command because yum is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
192.168.109.132 | CHANGED | rc=0 >>
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * epel: mirror.01link.hk
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:25.0.4-1.el7 will be installed
--> Processing Dependency: containerd.io >= 1.6.24 for package: 3:docker-ce-25.0.4-1.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-25.0.4-1.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-25.0.4-1.el7.x86_64
--> Running transaction check
---> Package containerd.io.x86_64 0:1.6.6-3.1.el7 will be updated
---> Package containerd.io.x86_64 0:1.6.28-3.1.el7 will be an update
---> Package docker-ce-cli.x86_64 1:25.0.4-1.el7 will be installed
--> Processing Dependency: docker-buildx-plugin for package: 1:docker-ce-cli-25.0.4-1.el7.x86_64
--> Processing Dependency: docker-compose-plugin for package: 1:docker-ce-cli-25.0.4-1.el7.x86_64
---> Package docker-ce-rootless-extras.x86_64 0:25.0.4-1.el7 will be installed
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-25.0.4-1.el7.x86_64
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-25.0.4-1.el7.x86_64
--> Running transaction check
---> Package docker-buildx-plugin.x86_64 0:0.13.0-1.el7 will be installed
---> Package docker-compose-plugin.x86_64 0:2.24.7-1.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch     Version          Repository          Size
================================================================================
Installing:
 docker-ce                   x86_64   3:25.0.4-1.el7   docker-ce-stable    26 M
Installing for dependencies:
 docker-buildx-plugin        x86_64   0.13.0-1.el7     docker-ce-stable    14 M
 docker-ce-cli               x86_64   1:25.0.4-1.el7   docker-ce-stable    14 M
 docker-ce-rootless-extras   x86_64   25.0.4-1.el7     docker-ce-stable   9.4 M
 docker-compose-plugin       x86_64   2.24.7-1.el7     docker-ce-stable    13 M
 fuse-overlayfs              x86_64   0.7.2-6.el7_8    extras              54 k
 fuse3-libs                  x86_64   3.6.1-4.el7      extras              82 k
 slirp4netns                 x86_64   0.4.3-4.el7_8    extras              81 k
Updating for dependencies:
 containerd.io               x86_64   1.6.28-3.1.el7   docker-ce-stable    35 M

Transaction Summary
================================================================================
Install  1 Package (+7 Dependent packages)
Upgrade             ( 1 Dependent package)

Total download size: 111 M
Downloading packages:
No Presto metadata available for docker-ce-stable
--------------------------------------------------------------------------------
Total                                              2.2 MB/s | 111 MB  00:51     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : slirp4netns-0.4.3-4.el7_8.x86_64                           1/10 
  Installing : docker-buildx-plugin-0.13.0-1.el7.x86_64                   2/10 
  Updating   : containerd.io-1.6.28-3.1.el7.x86_64                        3/10 
  Installing : fuse3-libs-3.6.1-4.el7.x86_64                              4/10 
  Installing : fuse-overlayfs-0.7.2-6.el7_8.x86_64                        5/10 
  Installing : docker-compose-plugin-2.24.7-1.el7.x86_64                  6/10 
  Installing : 1:docker-ce-cli-25.0.4-1.el7.x86_64                        7/10 
  Installing : docker-ce-rootless-extras-25.0.4-1.el7.x86_64              8/10 
  Installing : 3:docker-ce-25.0.4-1.el7.x86_64                            9/10 
  Cleanup    : containerd.io-1.6.6-3.1.el7.x86_64                        10/10 
  Verifying  : docker-compose-plugin-2.24.7-1.el7.x86_64                  1/10 
  Verifying  : fuse3-libs-3.6.1-4.el7.x86_64                              2/10 
  Verifying  : containerd.io-1.6.28-3.1.el7.x86_64                        3/10 
  Verifying  : fuse-overlayfs-0.7.2-6.el7_8.x86_64                        4/10 
  Verifying  : docker-buildx-plugin-0.13.0-1.el7.x86_64                   5/10 
  Verifying  : slirp4netns-0.4.3-4.el7_8.x86_64                           6/10 
  Verifying  : 1:docker-ce-cli-25.0.4-1.el7.x86_64                        7/10 
  Verifying  : 3:docker-ce-25.0.4-1.el7.x86_64                            8/10 
  Verifying  : docker-ce-rootless-extras-25.0.4-1.el7.x86_64              9/10 
  Verifying  : containerd.io-1.6.6-3.1.el7.x86_64                        10/10 

Installed:
  docker-ce.x86_64 3:25.0.4-1.el7                                               

Dependency Installed:
  docker-buildx-plugin.x86_64 0:0.13.0-1.el7                                    
  docker-ce-cli.x86_64 1:25.0.4-1.el7                                           
  docker-ce-rootless-extras.x86_64 0:25.0.4-1.el7                               
  docker-compose-plugin.x86_64 0:2.24.7-1.el7                                   
  fuse-overlayfs.x86_64 0:0.7.2-6.el7_8                                         
  fuse3-libs.x86_64 0:3.6.1-4.el7                                               
  slirp4netns.x86_64 0:0.4.3-4.el7_8                                            

Dependency Updated:
  containerd.io.x86_64 0:1.6.28-3.1.el7                                         

Complete!
192.168.109.131 | CHANGED | rc=0 >>
(output for 192.168.109.131 is essentially identical to node 132's transcript above, aside from different mirror hosts, and is omitted here)

[root@master1 ~]# ansible all -a "docker --version"
192.168.109.131 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
192.168.109.132 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5

Start docker:

[root@node1 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

1.1.15 Configure a containerd registry mirror

Perform the following on every k8s node.

Edit /etc/containerd/config.toml, find config_path = "", and change it to:

config_path = "/etc/containerd/certs.d"

Save and exit, then:

mkdir -p /etc/containerd/certs.d/docker.io/

vim /etc/containerd/certs.d/docker.io/hosts.toml

and write the following (one [host."..."] table per mirror):

[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]

Restart containerd.

[root@master1 ~]# vim /etc/containerd/config.toml
[root@master1 ~]# grep config_path /etc/containerd/config.toml
      config_path = "/etc/containerd/certs.d"
[root@master1 ~]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@master1 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
[root@master1 ~]# cat /etc/containerd/certs.d/docker.io/hosts.toml
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
[root@master1 ~]# systemctl restart containerd

1.1.16 Configure docker registry mirrors

Perform the following on every k8s node:
vim /etc/docker/daemon.json

with the following content:

{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com"]
}

Restart docker:

systemctl restart docker
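A malformed daemon.json stops the docker daemon from starting, so it is worth validating the JSON before the restart. A sketch using python3's stdlib json.tool (assumed available; CentOS 7 may only ship python2, in which case use python -m json.tool), demonstrated on a scratch copy:

```shell
# Write the mirror config to a scratch file and check it parses as JSON.
# /tmp/daemon.demo.json stands in for /etc/docker/daemon.json.
cat > /tmp/daemon.demo.json <<'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com", "https://registry.docker-cn.com"]
}
EOF
python3 -m json.tool /tmp/daemon.demo.json >/dev/null && echo "daemon.json OK"
```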

1.1.17 Install the packages needed to initialize k8s

[root@master1 ~]# yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
[root@master1 ~]#  systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@node1 ~]# yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
[root@node1 ~]#  systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

1.1.18 Initialize the k8s cluster with kubeadm

1.1.18.1 Set the container runtime
[root@master1 ~]# crictl config runtime-endpoint /run/containerd/containerd.sock
[root@node1 ~]# crictl config runtime-endpoint /run/containerd/containerd.sock
1.1.18.2 Initialize the k8s cluster with kubeadm

Generate the default config, then edit it to match our environment:

[root@master1 ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@master1 ~]# vim kubeadm.yaml 
[root@master1 ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.109.131
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

advertiseAddress: 192.168.109.131  # change to your own host IP

criSocket: unix:///run/containerd/containerd.sock  # use the containerd container runtime

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # use the Aliyun image repository

podSubnet: 10.244.0.0/16  # pod network; this line must be added

# Append the following at the end of the file (include the --- separators when copying):

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
1.1.18.3 Upload and import the k8s 1.25 image bundle
[root@master1 ~]# ctr -n=k8s.io images import k8s_1.25.0.tar.gz
unpacking registry.aliyuncs.com/google_containers/pause:3.7 (sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 (sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.4.7-0 (sha256:a5250021a52e8d2300b6c1c5111a12a3b2f70c463eac9e628e9589578c25cd7a)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 (sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 (sha256:f6902791fb9aa6e283ed7d1d743417b3c425eec73151517813bef1539a66aefa)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 (sha256:66ce7d460e53f942bb4729f656d66fe475ec3d41728de986b6d790eee6d8205d)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 (sha256:1b1f3456bb19866aa1655c607514b85cd2b6efdfea4d93ea55e79475ff2765f9)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 (sha256:9330c53feca7b51b25e427fa96afd5d1460b3233e9fa92e20c895c067da56ac1)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 (sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d)...done
[root@node1 ~]# ctr -n=k8s.io images import k8s_1.25.0.tar.gz
unpacking registry.aliyuncs.com/google_containers/pause:3.7 (sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 (sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.4.7-0 (sha256:a5250021a52e8d2300b6c1c5111a12a3b2f70c463eac9e628e9589578c25cd7a)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 (sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 (sha256:f6902791fb9aa6e283ed7d1d743417b3c425eec73151517813bef1539a66aefa)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 (sha256:66ce7d460e53f942bb4729f656d66fe475ec3d41728de986b6d790eee6d8205d)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 (sha256:1b1f3456bb19866aa1655c607514b85cd2b6efdfea4d93ea55e79475ff2765f9)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 (sha256:9330c53feca7b51b25e427fa96afd5d1460b3233e9fa92e20c895c067da56ac1)...done
unpacking registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 (sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d)...done
1.1.18.4 Install and initialize k8s on the master
[root@master1 ~]#  kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.109.131:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:4b8c73ffd02edf9a635101f5cb618eead92aa52b33f0717deef46eef0eda03af

Set up the kubectl config file; this effectively authorizes kubectl, letting the kubectl command manage the k8s cluster with this certificate:

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   3m32s   v1.25.0

1.1.19 Add a worker node

Generate the join command on the master: kubeadm token create --print-join-command

[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.109.131:6443 --token aqxszy.vh1qvs2ef3alyctp --discovery-token-ca-cert-hash sha256:4b8c73ffd02edf9a635101f5cb618eead92aa52b33f0717deef46eef0eda03af
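The token printed above is a bootstrap token with a fixed shape: a 6-character token ID, a dot, then a 16-character secret, all lowercase letters and digits. A quick sanity check before pasting a token into a join command (`valid_bootstrap_token` is a hypothetical helper, not part of kubeadm):

```shell
# valid_bootstrap_token: check the <id>.<secret> bootstrap token format
# ([a-z0-9]{6}.[a-z0-9]{16}) that kubeadm token create emits.
valid_bootstrap_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}
valid_bootstrap_token "aqxszy.vh1qvs2ef3alyctp" && echo "token format OK"
```

Note that tokens created this way expire after 24 hours by default, so re-run `kubeadm token create --print-join-command` if a join fails with an authentication error.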

Join node1 to the k8s cluster:

Command: kubeadm join 192.168.109.131:6443 --token aqxszy.vh1qvs2ef3alyctp --discovery-token-ca-cert-hash sha256:4b8c73ffd02edf9a635101f5cb618eead92aa52b33f0717deef46eef0eda03af --ignore-preflight-errors=SystemVerification

[root@node1 ~]# kubeadm join 192.168.109.131:6443 --token aqxszy.vh1qvs2ef3alyctp --discovery-token-ca-cert-hash sha256:4b8c73ffd02edf9a635101f5cb618eead92aa52b33f0717deef46eef0eda03af --ignore-preflight-errors=SystemVerification
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster nodes on master1:

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   8m25s   v1.25.0
node1     NotReady   <none>          54s     v1.25.0

Label node1 so that its ROLES column shows work:

kubectl label nodes node1 node-role.kubernetes.io/work=work

[root@master1 ~]# kubectl label nodes node1 node-role.kubernetes.io/work=work        
node/node1 labeled
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   10m    v1.25.0
node1     NotReady   work            3m1s   v1.25.0

1.1.20 Install the Kubernetes network plugin: Calico

Upload calico.tar.gz, which contains the images Calico needs, to the master1 and node1 nodes, then import it manually:
ctr -n=k8s.io images import calico.tar.gz

[root@master1 ~]# ctr -n=k8s.io images import calico.tar.gz
unpacking docker.io/calico/cni:v3.18.0 (sha256:3f4da42b983e5cdcd6ca8f5f18ab9228988908f0d0fc7b4ccdfdea133badac4b)...done
unpacking docker.io/calico/node:v3.18.0 (sha256:ea61434ae750a9bc6b7e998f6fc4d8eeab43f53ba2de89fc5bbf1459a7eee667)...done
unpacking docker.io/calico/pod2daemon-flexvol:v3.18.0 (sha256:d18a19134ccf88a2f97f220400953934655b5734eb846d3ac1a72e8e32f0df32)...done
unpacking docker.io/calico/kube-controllers:v3.18.0 (sha256:c9c9ea8416dc0d09c5df883a3a79bad028516beb5a04d380e2217f41e9aff1f0)...done
[root@node1 ~]# ctr -n=k8s.io images import calico.tar.gz
unpacking docker.io/calico/cni:v3.18.0 (sha256:3f4da42b983e5cdcd6ca8f5f18ab9228988908f0d0fc7b4ccdfdea133badac4b)...done
unpacking docker.io/calico/node:v3.18.0 (sha256:ea61434ae750a9bc6b7e998f6fc4d8eeab43f53ba2de89fc5bbf1459a7eee667)...done
unpacking docker.io/calico/pod2daemon-flexvol:v3.18.0 (sha256:d18a19134ccf88a2f97f220400953934655b5734eb846d3ac1a72e8e32f0df32)...done
unpacking docker.io/calico/kube-controllers:v3.18.0 (sha256:c9c9ea8416dc0d09c5df883a3a79bad028516beb5a04d380e2217f41e9aff1f0)...done

Upload calico.yaml to master1 and install the Calico network plugin from it:
kubectl apply -f calico.yaml
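Before applying the manifest, it is worth checking that if `CALICO_IPV4POOL_CIDR` is set (uncommented) in calico.yaml, it matches the podSubnet given to kubeadm (10.244.0.0/16 in this guide); a mismatch leaves pods with routes outside the expected network. A hypothetical pre-flight helper (`check_pool_cidr` is introduced here for illustration):

```shell
# check_pool_cidr: succeed only if the CALICO_IPV4POOL_CIDR entry in a
# Calico manifest is followed by the expected pod CIDR value.
check_pool_cidr() {
  manifest="$1"; expected="$2"
  grep -A1 'CALICO_IPV4POOL_CIDR' "$manifest" | grep -q "$expected"
}
# Usage on master1: check_pool_cidr calico.yaml 10.244.0.0/16 && echo "pod CIDR matches"
```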

[root@master1 ~]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   25m   v1.25.0
node1     Ready    work            18m   v1.25.0

1.1.21 Test whether a pod created in k8s can access the network

Upload busybox-1-28.tar.gz to the master and node1 nodes and import it manually:
ctr -n k8s.io images import busybox-1-28.tar.gz

[root@master1 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
unpacking docker.io/library/busybox:1.28 (sha256:585093da3a716161ec2b2595011051a90d2f089bc2a25b4a34a18e2cf542527c)...done
[root@node1 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
unpacking docker.io/library/busybox:1.28 (sha256:585093da3a716161ec2b2595011051a90d2f089bc2a25b4a34a18e2cf542527c)...done

kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it busybox -- sh

[root@master1 ~]# kubectl run busybox --image docker.io/library/busybox:1.28  --image-pull-policy=IfNotPresent --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (157.148.69.80): 56 data bytes
64 bytes from 157.148.69.80: seq=0 ttl=127 time=76.515 ms
64 bytes from 157.148.69.80: seq=1 ttl=127 time=18.015 ms
64 bytes from 157.148.69.80: seq=2 ttl=127 time=17.512 ms
64 bytes from 157.148.69.80: seq=3 ttl=127 time=17.962 ms
^C
--- www.baidu.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 17.512/32.501/76.515 ms

The output above shows the pod can reach the external network, confirming that the Calico network plugin is installed and working correctly.
