K8s Worker Node Setup: Docker Environment

System Environment

Node     IP              CentOS                                 kernel                  CPU                                      Memory
node1    192.168.159.4   CentOS Linux release 7.4.1708 (Core)   3.10.0-693.el7.x86_64   Intel® Core™ i5-7500 CPU @ 3.40GHz * 1   2G
node2    192.168.159.5   CentOS Linux release 7.4.1708 (Core)   3.10.0-693.el7.x86_64   Intel® Core™ i5-7500 CPU @ 3.40GHz * 1   2G

Node Software Environment

Node     IP              docker    flannel
node1    192.168.159.4   19.03.1   0.11.0
node2    192.168.159.5   19.03.1   0.11.0

Docker Installation

Official documentation

Docker configuration file reference

docker download

Official download URL

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.1.tgz
tar -zxvf docker-19.03.1.tgz
cp -f docker/* /usr/local/bin/
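
A quick sanity check that the static binaries landed on the PATH; all three support a version flag:

docker --version
dockerd --version
containerd --version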

docker configuration file

cat > /opt/k8s/node/etc/docker.conf << "EOF"
DOCKER_NETWORK_OPTIONS="-H unix:///var/run/docker.sock \
-H 0.0.0.0:2375"
"EOF"

docker service file

cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Engine Service
Documentation=https://docs.docker.com
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
EnvironmentFile=-/opt/k8s/node/etc/docker.conf
ExecStart=/usr/local/bin/dockerd $DOCKER_NETWORK_OPTIONS 
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

docker service startup

Start
systemctl daemon-reload && systemctl start docker
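
To have docker start on boot as well, the unit can be enabled (optional):
systemctl enable docker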
Check

Check the docker version information

[root@node1 k8s]# docker version
Client: Docker Engine - Community
 Version:           19.03.1
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        74b1e89e8a
 Built:             Thu Jul 25 21:17:37 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       74b1e89e8a
  Built:            Thu Jul 25 21:27:55 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

List local images

[root@node1 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

List containers

[root@node1 docker]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Verification
  • Search the registry for an image
        [root@node1 pki]# docker search centos
       NAME                               DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
       centos                             The official build of CentOS.                   5524                [OK]                
       ansible/centos7-ansible            Ansible on Centos7                              122                                     [OK]
       jdeathe/centos-ssh                 CentOS-6 6.10 x86_64 / CentOS-7 7.6.1810 x86…   111                                     [OK]
       consol/centos-xfce-vnc             Centos container with "headless" VNC session…   99                                      [OK]
       centos/mysql-57-centos7            MySQL 5.7 SQL database server                   62                                      
       imagine10255/centos6-lnmp-php56    centos6-lnmp-php56                              57                                      [OK]
       tutum/centos                       Simple CentOS docker image with SSH access      44                                      
       centos/postgresql-96-centos7       PostgreSQL is an advanced Object-Relational …   39                                      
       kinogmt/centos-ssh                 CentOS with SSH                                 29                                      [OK]
       pivotaldata/centos-gpdb-dev        CentOS image for GPDB development. Tag names…   10                                      
       guyton/centos6                     From official centos6 container with full up…   9                                       [OK]
       drecom/centos-ruby                 centos ruby                                     6                                       [OK]
       pivotaldata/centos                 Base centos, freshened up a little with a Do…   3                                       
       mamohr/centos-java                 Oracle Java 8 Docker image based on Centos 7    3                                       [OK]
       darksheer/centos                   Base Centos Image -- Updated hourly             3                                       [OK]
       pivotaldata/centos-gcc-toolchain   CentOS with a toolchain, but unaffiliated wi…   2                                       
       pivotaldata/centos-mingw           Using the mingw toolchain to cross-compile t…   2                                       
       ovirtguestagent/centos7-atomic     The oVirt Guest Agent for Centos 7 Atomic Ho…   2                                       
       miko2u/centos6                     CentOS6 日本語環境                                   2                                       [OK]
       mcnaughton/centos-base             centos base image                               1                                       [OK]
       indigo/centos-maven                Vanilla CentOS 7 with Oracle Java Developmen…   1                                       [OK]
       blacklabelops/centos               CentOS Base Image! Built and Updates Daily!     1                                       [OK]
       smartentry/centos                  centos with smartentry                          0                                       [OK]
       pivotaldata/centos7-dev            CentosOS 7 image for GPDB development           0                                       
       pivotaldata/centos6.8-dev          CentosOS 6.8 image for GPDB development         0 
    
  • Pull the centos image
    [root@node1 pki]# docker pull centos
    Using default tag: latest
    latest: Pulling from library/centos
    d8d02d457314: Pull complete 
    Digest: sha256:307835c385f656ec2e2fec602cf093224173c51119bbebd602c53c3653a3d6eb
    Status: Downloaded newer image for centos:latest
    docker.io/library/centos:latest
    
    [root@node1 pki]# docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    centos              latest              67fa590cfc1c        2 days ago          202MB
    
  • Create the test container

    [root@node1 pki]# docker run -i -t --name test centos /bin/bash
    WARNING: IPv4 forwarding is disabled. Networking will not work.
    docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown.
    
    [root@node1 pki]# docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    5ba3f8995eff        centos              "/bin/bash"         7 minutes ago       Created
    
  • Fixing problem 1
    WARNING: IPv4 forwarding is disabled. Networking will not work.
    Option 1: enable IPv4 forwarding so that containers can reach external networks

    sed -i '$a\net.ipv4.ip_forward=1' /usr/lib/sysctl.d/00-system.conf
    systemctl daemon-reload && systemctl restart network
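
    To confirm the kernel picked up the change, query the parameter directly; it should now print 1:

    sysctl net.ipv4.ip_forward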
    

    Option 2: add the startup flag --ip-forward=true to DOCKER_NETWORK_OPTIONS; when the docker service restarts it then sets the system ip_forward parameter to 1 automatically, as sketched below.
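
    A sketch of the docker.conf from earlier with this flag added (--ip-forward is a standard dockerd flag; the path matches the file created above):

    cat > /opt/k8s/node/etc/docker.conf << "EOF"
    DOCKER_NETWORK_OPTIONS="-H unix:///var/run/docker.sock \
    -H 0.0.0.0:2375 \
    --ip-forward=true"
    EOF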

  • Fixing problem 2

    docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown.
    

    Check SELinux status

    [root@node1 pki]# getenforce
    Enforcing
    

    Temporarily disable SELinux

    setenforce 0
    

    Check SELinux status

    [root@node1 pki]# getenforce
    Permissive
    

    Start the container

    [root@node1 pki]# docker start test
    test
    
    [root@node1 pki]# docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    5ba3f8995eff        centos              "/bin/bash"         14 minutes ago      Up 14 seconds
    

    Permanently disable SELinux

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    reboot
    systemctl start docker
    docker start test
    

    Check SELinux status

    [root@node1 pki]# getenforce
    Disabled
    
Common Docker Commands
  • Attach to the test container's shell

    docker attach test
    
  • Exit the container
    Exit and stop the container:
    exit, or Ctrl+D
    Detach while leaving the container running:
    Ctrl+P then Ctrl+Q
    (or skip attaching entirely; see the docker exec example at the end of this list)

  • Stop the container
    docker stop test

  • Remove the test container
    docker rm test

  • Save an image to a local archive
    docker save -o centos.tar centos:latest

  • Remove an image
    docker rmi centos:latest

  • Load an image from a local archive
    docker load --input centos.tar
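
  • Get a shell without attaching
    docker exec starts a new process in a running container, so exiting that shell does not stop the container the way exiting an attached shell does. A minimal example, assuming the test container is running:
    docker exec -i -t test /bin/bash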

Docker Cross-Host Cluster Network Configuration

Checking Node Network Interfaces

Check node1's interfaces; a docker0 bridge has been added

[root@node1 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:c5ff:fed3:c41d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c5:d3:c4:1d  txqueuelen 0  (Ethernet)
        RX packets 2901  bytes 119827 (117.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3839  bytes 15022256 (14.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.159.4  netmask 255.255.255.0  broadcast 192.168.159.255
        inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2b45:985c:3f4e:908b  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:34:3a:20  txqueuelen 1000  (Ethernet)
        RX packets 65971  bytes 19511878 (18.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56734  bytes 3494791 (3.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 32  bytes 2592 (2.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 2592 (2.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9d114c0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::80e1:39ff:fe8e:240e  prefixlen 64  scopeid 0x20<link>
        ether 82:e1:39:8e:24:0e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Check node2's interfaces; a docker0 bridge has been added

[root@node2 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:1aff:fe5e:9ea4  prefixlen 64  scopeid 0x20<link>
        ether 02:42:1a:5e:9e:a4  txqueuelen 0  (Ethernet)
        RX packets 13  bytes 980 (980.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 2014 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.159.5  netmask 255.255.255.0  broadcast 192.168.159.255
        inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:24:97:9a  txqueuelen 1000  (Ethernet)
        RX packets 150770  bytes 220073564 (209.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8739  bytes 781644 (763.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 64  bytes 5568 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 64  bytes 5568 (5.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth7fa5fd1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::6c8e:4dff:fed4:5761  prefixlen 64  scopeid 0x20<link>
        ether 6e:8e:4d:d4:57:61  txqueuelen 0  (Ethernet)
        RX packets 13  bytes 1162 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 2014 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Check node1's docker networks

[root@node1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
899aadb08971        bridge              bridge              local
2b6bb260df1e        host                host                local
f227670e6a5b        none                null                local

bridge: similar to VMware's NAT mode (container eth0 --> veth* --> docker0 --> host ens33); each container is given an IP from the docker0 subnet, with docker0 as its default gateway;
host: similar to VMware's bridged mode; the container has no separate network stack and uses the host's IP and ports;
none: no network is configured; suitable for containers that need no network access, or for overlay-style networking set up independently.
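
To see the subnet and gateway actually assigned to the default bridge, inspect it; the --format filter below uses standard docker CLI templating, and the output should resemble the docker0 values shown above:

docker network inspect bridge --format '{{json .IPAM.Config}}'
# e.g. [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]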

Check node2's docker networks

[root@node2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
16f0b5194fdb        bridge              bridge              local
593efbd057ff        host                host                local
e9e9bbc0bef6        none                null                local
Testing Cross-Host Container Access
Start a centos container named node1 on host node1
  • Start
    docker run -i -t --name node1 centos /bin/bash

  • Check the container's hosts file

    [root@0402a3fb29e6 /]# cat /etc/hosts
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.17.0.2	0402a3fb29e6
    
Start a centos container named node2 on host node2
  • Start
    docker run -i -t --name node2 centos

  • Check the container's hosts file

    [root@bc6194e6391e /]# cat /etc/hosts
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.17.0.2	bc6194e6391e
    
Host-to-host access
  • Host node1 to host node2

    [root@node1 ~]# ping node2
    PING node2 (192.168.159.5) 56(84) bytes of data.
    64 bytes from node2 (192.168.159.5): icmp_seq=1 ttl=64 time=0.252 ms
    64 bytes from node2 (192.168.159.5): icmp_seq=2 ttl=64 time=0.463 ms
    64 bytes from node2 (192.168.159.5): icmp_seq=3 ttl=64 time=0.374 ms
    64 bytes from node2 (192.168.159.5): icmp_seq=4 ttl=64 time=0.242 ms
    64 bytes from node2 (192.168.159.5): icmp_seq=5 ttl=64 time=0.495 ms
    
  • Host node2 to host node1

    [root@node2 ~]# ping node1
    PING node1 (192.168.159.4) 56(84) bytes of data.
    64 bytes from node1 (192.168.159.4): icmp_seq=1 ttl=64 time=0.557 ms
    64 bytes from node1 (192.168.159.4): icmp_seq=2 ttl=64 time=0.441 ms
    64 bytes from node1 (192.168.159.4): icmp_seq=3 ttl=64 time=0.413 ms
    64 bytes from node1 (192.168.159.4): icmp_seq=4 ttl=64 time=0.284 ms
    64 bytes from node1 (192.168.159.4): icmp_seq=5 ttl=64 time=0.301 ms
    
Container-to-host access
  • Container node1 to host node1

    [root@0402a3fb29e6 /]# ping 192.168.159.4
    PING 192.168.159.4 (192.168.159.4) 56(84) bytes of data.
    64 bytes from 192.168.159.4: icmp_seq=1 ttl=64 time=0.261 ms
    64 bytes from 192.168.159.4: icmp_seq=2 ttl=64 time=0.250 ms
    64 bytes from 192.168.159.4: icmp_seq=3 ttl=64 time=0.129 ms
    64 bytes from 192.168.159.4: icmp_seq=4 ttl=64 time=0.126 ms
    64 bytes from 192.168.159.4: icmp_seq=5 ttl=64 time=0.129 ms
    
  • Container node1 to host node2

    [root@0402a3fb29e6 /]# ping 192.168.159.5
    PING 192.168.159.5 (192.168.159.5) 56(84) bytes of data.
    64 bytes from 192.168.159.5: icmp_seq=1 ttl=63 time=1.28 ms
    64 bytes from 192.168.159.5: icmp_seq=2 ttl=63 time=0.310 ms
    64 bytes from 192.168.159.5: icmp_seq=3 ttl=63 time=0.724 ms
    64 bytes from 192.168.159.5: icmp_seq=4 ttl=63 time=0.523 ms
    64 bytes from 192.168.159.5: icmp_seq=5 ttl=63 time=0.564 ms
    
  • Container node2 to host node2

    [root@bc6194e6391e /]# ping 192.168.159.5
    PING 192.168.159.5 (192.168.159.5) 56(84) bytes of data.
    64 bytes from 192.168.159.5: icmp_seq=1 ttl=64 time=0.283 ms
    64 bytes from 192.168.159.5: icmp_seq=2 ttl=64 time=0.210 ms
    64 bytes from 192.168.159.5: icmp_seq=3 ttl=64 time=0.129 ms
    64 bytes from 192.168.159.5: icmp_seq=4 ttl=64 time=0.122 ms
    64 bytes from 192.168.159.5: icmp_seq=5 ttl=64 time=0.258 ms
    
  • Container node2 to host node1

    [root@bc6194e6391e /]# ping 192.168.159.4
    PING 192.168.159.4 (192.168.159.4) 56(84) bytes of data.
    64 bytes from 192.168.159.4: icmp_seq=1 ttl=63 time=0.571 ms
    64 bytes from 192.168.159.4: icmp_seq=2 ttl=63 time=0.356 ms
    64 bytes from 192.168.159.4: icmp_seq=3 ttl=63 time=0.609 ms
    64 bytes from 192.168.159.4: icmp_seq=4 ttl=63 time=0.607 ms
    64 bytes from 192.168.159.4: icmp_seq=5 ttl=63 time=0.417 ms
    
Container-to-container access

ping -c 3 172.17.0.2

At this point neither host has a route to the other host's container subnet, and the two containers even share the same IP address (172.17.0.2), so neither container can reach the other.
tcpdump captures show that the packets never reach the host's docker0, veth*, or ens33 interfaces.

  • tcpdump command
    tcpdump -i docker0 -n icmp
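
    To watch all three hops at once, a sketch that runs short parallel captures (the veth name is host-specific; substitute the one shown by ifconfig):

    for intf in docker0 veth9d114c0 ens33; do
        tcpdump -i "$intf" -n -c 10 icmp &
    done
    wait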
Cross-Host Container Access via Direct Routing
Change the docker0 network address on host node1
  • Edit /etc/docker/daemon.json

    This is the docker daemon's default configuration file; its location is set with --config-file and defaults to /etc/docker/daemon.json.
    Here the node1 bridge IP is set to 172.17.1.1/24. The same could be done with --bip=172.17.1.1/24 in DOCKER_NETWORK_OPTIONS, but the two must not both be set, or docker will fail to start.

    [root@node1 ~]# cat /etc/docker/daemon.json 
    {
    "bip": "172.17.1.1/24"
    }
    
  • Restart docker
    systemctl daemon-reload && systemctl restart docker

  • Check host node1's network

    [root@node1 ~]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.1.1  netmask 255.255.255.0  broadcast 172.17.1.255
            inet6 fe80::42:23ff:fe43:87a4  prefixlen 64  scopeid 0x20<link>
            ether 02:42:23:43:87:a4  txqueuelen 0  (Ethernet)
            RX packets 2068  bytes 97452 (95.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2847  bytes 17131048 (16.3 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.159.4  netmask 255.255.255.0  broadcast 192.168.159.255
            inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
            ether 00:50:56:34:3a:20  txqueuelen 1000  (Ethernet)
            RX packets 474960  bytes 52578160 (50.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 461729  bytes 28211425 (26.9 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethf5fadcf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::404e:afff:feec:c983  prefixlen 64  scopeid 0x20<link>
            ether 42:4e:af:ec:c9:83  txqueuelen 0  (Ethernet)
            RX packets 187  bytes 17934 (17.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 15  bytes 950 (950.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
  • Check container node1's network

    [root@0402a3fb29e6 /]# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.1.2  netmask 255.255.255.0  broadcast 172.17.1.255
            ether 02:42:ac:11:01:02  txqueuelen 0  (Ethernet)
            RX packets 8  bytes 656 (656.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
Change the docker0 network address on host node2
  • Edit /etc/docker/daemon.json

    Set the node2 bridge IP to 172.17.0.1/24

    [root@node2 ~]# cat /etc/docker/daemon.json 
    {
    "bip": "172.17.0.1/24"
    }
    
  • Restart docker
    systemctl daemon-reload && systemctl restart docker

  • Check host node2's network

    [root@node2 ~]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.255.0  broadcast 172.17.0.255
            inet6 fe80::42:a6ff:feb7:78bf  prefixlen 64  scopeid 0x20<link>
            ether 02:42:a6:b7:78:bf  txqueuelen 0  (Ethernet)
            RX packets 4121  bytes 170163 (166.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4782  bytes 17294541 (16.4 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.159.5  netmask 255.255.255.0  broadcast 192.168.159.255
            inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::2b45:985c:3f4e:908b  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::c846:2166:96b5:5f86  prefixlen 64  scopeid 0x20<link>
            ether 00:50:56:24:97:9a  txqueuelen 1000  (Ethernet)
            RX packets 18895  bytes 18246491 (17.4 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 8037  bytes 815344 (796.2 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 68  bytes 5904 (5.7 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 68  bytes 5904 (5.7 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth09bb648: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::9069:b6ff:fed1:8797  prefixlen 64  scopeid 0x20<link>
            ether 92:69:b6:d1:87:97  txqueuelen 0  (Ethernet)
            RX packets 4  bytes 280 (280.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 12  bytes 936 (936.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
Container node1 to container node2

Unreachable

[root@0402a3fb29e6 /]# ping -c 3 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.

--- 172.17.0.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2000ms

tcpdump capture on the host ens33 interface

[root@node1 ~]# tcpdump -i ens33 -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
17:12:19.636080 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 37, seq 1, length 64
17:12:20.640152 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 37, seq 2, length 64
17:12:21.640212 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 37, seq 3, length 64
Container node2 to container node1

Unreachable

[root@bc6194e6391e /]# ping -c 3 172.17.1.2
PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
From 172.17.0.2 icmp_seq=1 Destination Host Unreachable
From 172.17.0.2 icmp_seq=2 Destination Host Unreachable
From 172.17.0.2 icmp_seq=3 Destination Host Unreachable

--- 172.17.1.2 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 1999ms
pipe 3
Add static routes
  • Host node1
    ip route add 172.17.0.0/24 via 192.168.159.5

    [root@node1 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    172.17.0.0      192.168.159.5   255.255.255.0   UG    0      0        0 ens33
    172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
    

    Make the route permanent

    cat > /etc/sysconfig/static-routes << EOF
    any net 172.17.0.0 netmask 255.255.255.0 gw 192.168.159.5 dev ens33
    EOF
    
  • Host node2
    ip route add 172.17.1.0/24 via 192.168.159.4

    [root@node2 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    172.17.0.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
    172.17.1.0      192.168.159.4   255.255.255.0   UG    0      0        0 ens33
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
    

    Make the route permanent

    cat > /etc/sysconfig/static-routes << EOF
    any net 172.17.1.0 netmask 255.255.255.0 gw 192.168.159.4 dev ens33
    EOF
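
    The static-routes file is read by the legacy network service, so these routes survive reboots; to apply them immediately without rebooting, restart the service and re-check the table on both hosts:

    systemctl restart network && route -n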
    
Retesting container-to-container access
  • Container node1 to container node2

    [root@0402a3fb29e6 /]# ping 172.17.0.2 -c 3
    PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
    64 bytes from 172.17.0.2: icmp_seq=1 ttl=62 time=1.34 ms
    64 bytes from 172.17.0.2: icmp_seq=2 ttl=62 time=0.934 ms
    64 bytes from 172.17.0.2: icmp_seq=3 ttl=62 time=0.425 ms
    
    --- 172.17.0.2 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    rtt min/avg/max/mdev = 0.425/0.902/1.347/0.377 ms
    
  • Capture on container node2's eth0

    [root@bc6194e6391e /]# tcpdump -i eth0 -n icmp
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    09:49:31.700980 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 46, seq 1, length 64
    09:49:31.701198 IP 172.17.0.2 > 192.168.159.4: ICMP echo reply, id 46, seq 1, length 64
    09:49:32.703187 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 46, seq 2, length 64
    09:49:32.703231 IP 172.17.0.2 > 192.168.159.4: ICMP echo reply, id 46, seq 2, length 64
    09:49:33.704675 IP 192.168.159.4 > 172.17.0.2: ICMP echo request, id 46, seq 3, length 64
    09:49:33.704695 IP 172.17.0.2 > 192.168.159.4: ICMP echo reply, id 46, seq 3, length 64
    
  • Container node2 to container node1

    [root@bc6194e6391e /]# ping 172.17.1.2 -c 3
    PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
    64 bytes from 172.17.1.2: icmp_seq=1 ttl=62 time=1.69 ms
    64 bytes from 172.17.1.2: icmp_seq=2 ttl=62 time=0.987 ms
    64 bytes from 172.17.1.2: icmp_seq=3 ttl=62 time=1.15 ms
    
    --- 172.17.1.2 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    rtt min/avg/max/mdev = 0.987/1.279/1.695/0.304 ms
    
  • Capture on container node1's eth0

    [root@0402a3fb29e6 /]# tcpdump -i eth0 -n icmp
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    09:52:51.691610 IP 192.168.159.5 > 172.17.1.2: ICMP echo request, id 21, seq 1, length 64
    09:52:51.691738 IP 172.17.1.2 > 192.168.159.5: ICMP echo reply, id 21, seq 1, length 64
    09:52:52.693258 IP 192.168.159.5 > 172.17.1.2: ICMP echo request, id 21, seq 2, length 64
    09:52:52.693361 IP 172.17.1.2 > 192.168.159.5: ICMP echo reply, id 21, seq 2, length 64
    09:52:53.695265 IP 192.168.159.5 > 172.17.1.2: ICMP echo request, id 21, seq 3, length 64
    09:52:53.695350 IP 172.17.1.2 > 192.168.159.5: ICMP echo reply, id 21, seq 3, length 64
    

Static routing achieves cross-host docker container networking, but configuring routes by hand is far too inefficient for a large cluster; the next section improves on this.

Configuring the Docker Cluster Network with flannel
Preparation
  • Stop the docker service
    systemctl stop docker

  • Start etcd

    flannel uses etcd as its datastore, storing the network configuration, the allocated subnets, each host's IP, and so on,
    so make sure etcd is up and healthy first. The etcd cluster was configured in an earlier chapter and is reused here directly.

    [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem cluster-health
    member 46899d42c87d524e is healthy: got healthy result from https://192.168.159.4:2379
    member a3ec213779ea2c81 is healthy: got healthy result from https://192.168.159.3:2379
    cluster is healthy
    
  • Remove the static routes
    sed -i '$d' /etc/sysconfig/static-routes && systemctl restart network

    [root@node1 etc]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
    
    [root@node2 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
    
Add the flannel Network Configuration to etcd
  • Flannel network configuration

    flannel-etcd-config.json

    {
        "Network": "172.17.0.0/8",
        "SubnetLen": "24",
        "SubnetMin": "172.17.0.0",
        "SudbnetMax": "172.17.255.0",
        "Backend": {
            "Type": "vxlan",
            "VNI": 1,      
            "Port": 8472         
        }
    }
    
    • Field descriptions
      Backend documentation
        Network: the flannel address pool; it must be able to hold at least four subnets;
        SubnetLen: the prefix length of each host's docker0 subnet; note that this is an integer, not a string;
        SubnetMin: the first subnet to allocate; defaults to the first subnet of the pool;
        SubnetMax: the last subnet to allocate; defaults to the last subnet of the pool;
        Backend: the backend type to use and its options:
          Type: backend type; the default is udp, vxlan is recommended, and host-gw performs best but does not work in cloud environments;
          VNI: the VXLAN identifier (VNI) to use; on Linux the default is 1, on Windows it should be 4096 or greater;
          Port: the UDP port used to send encapsulated packets; the Linux kernel default is 8472, but on Windows it must be 4789.
      
  • Write the configuration to etcd

    [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem set /k8s/network/config < /opt/k8s/node/etc/flannel-etcd-config.json 
    {
        "Network": "172.17.0.0",
        "SubnetLen": 24,
        "SubnetMin": "172.17.0.0",
        "SudbnetMax": "172.17.255.0",
        "Backend": {
            "Type": "vxlan",
            "VNI": 1,      
            "Port": 8472         
        }
    } 
    
  • Confirm the configuration in etcd

    [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem get /k8s/network/config
    {
        "Network": "172.17.0.0/8",
        "SubnetLen": 24,
        "SubnetMin": "172.17.0.0",
        "SudbnetMax": "172.17.255.0",
        "Backend": {
            "Type": "vxlan",
            "VNI": 1,      
            "Port": 8472         
        }
    }
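
A malformed value in flannel-etcd-config.json only surfaces when flanneld starts, so it is cheap to validate the local file as well; python ships with CentOS 7 and json.tool is part of its standard library:

python -m json.tool < /opt/k8s/node/etc/flannel-etcd-config.json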
    
flannel Installation

Perform the following steps on both node1 and node2

Official documentation
Reference blog post

flannel download

Official download URL

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
mkdir -p flannel && tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
cd flannel
cp -f flanneld mk-docker-opts.sh /usr/local/bin
flannel configuration file
cat > /opt/k8s/node/etc/flanneld.conf << EOF
FLANNELD_INSECURE_OPTS="--etcd-cafile=/opt/etcd/pki/ca.pem \
--etcd-certfile=/opt/etcd/pki/etcdctl.pem \
--etcd-keyfile=/opt/etcd/pki/etcdctl-key.pem \
--etcd-endpoints=https://192.168.159.3:2379,https://192.168.159.4:2379 \
--etcd-prefix=/k8s/network \
--kube-subnet-mgr=false \
--subnet-file=/run/flannel/subnet.env \
--iface=ens33 \
--iptables-resync=5 \
--ip-masq=true"
EOF
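
flanneld will exit immediately if it cannot read the etcd client certificates referenced above, so it is worth confirming they are in place before the first start:

for f in /opt/etcd/pki/ca.pem /opt/etcd/pki/etcdctl.pem /opt/etcd/pki/etcdctl-key.pem; do
    [ -r "$f" ] || echo "missing or unreadable: $f"
done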
  • Parameter descriptions

    Parameter            Description
    --etcd-cafile        etcd CA certificate
    --etcd-certfile      etcd client certificate
    --etcd-keyfile       etcd client certificate private key
    --etcd-endpoints     etcd endpoint addresses
    --etcd-prefix        etcd key prefix holding the flannel network configuration
    --kube-subnet-mgr    false: read the network config from etcd; true: read it from the file given by --net-config-path
    --subnet-file        file where the generated subnet information is written
    --iface              interface for cross-host traffic; defaults to the interface of the default route; may be given multiple times and is searched in order
    --iptables-resync    iptables resync interval in seconds
    --ip-masq            set up IP masquerading for traffic leaving the flannel network
flannel service file
cat > /usr/lib/systemd/system/flanneld.service << 'EOF'
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=network-online.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/k8s/node/etc/flanneld.conf
ExecStart=/usr/local/bin/flanneld $FLANNELD_INSECURE_OPTS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Field descriptions

    Field            Description
    Type             notify: the service reports READY=1 once startup succeeds
    Before           start flanneld before docker.service
    ExecStartPost    runs after ExecStart succeeds; here it generates docker's subnet options
flannel service startup
  • Start

    systemctl daemon-reload && systemctl start flanneld
    
  • Verify

    • Service status

      [root@node1 pki]# systemctl status flanneld
      ● flanneld.service - Flanneld
         Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
         Active: active (running) since Tue 2019-08-27 15:11:05 CST; 30min ago
           Docs: https://github.com/coreos/flannel
        Process: 2447 ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
       Main PID: 2432 (flanneld)
          Tasks: 8
         Memory: 8.4M
         CGroup: /system.slice/flanneld.service
                 └─2432 /usr/local/bin/flanneld --etcd-cafile=/opt/etcd/pki/ca.pem --etcd-certfile=/opt/etcd/pki/etcdctl.pem --etcd-keyfile=/opt/etcd/pki/etcdctl-key.pem --etcd-endpoints=https://192.168.159.3..
      
    • Data stored in etcd

      [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem ls /k8s/network/subnets
      /k8s/network/subnets/172.17.83.0-24
      /k8s/network/subnets/172.17.46.0-24
      
      [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem get /k8s/network/subnets/172.17.83.0-24
      {"PublicIP":"192.168.159.4","BackendType":"vxlan","BackendData":{"VtepMAC":"5a:f0:7a:bf:de:79"}}
      
      [root@node1 pki]# etcdctl -ca-file=ca.pem -cert-file=etcdctl.pem -key-file=etcdctl-key.pem get /k8s/network/subnets/172.17.46.0-24
      {"PublicIP":"192.168.159.5","BackendType":"vxlan","BackendData":{"VtepMAC":"fe:32:4e:da:3f:21"}}
      
    • docker subnet options generated locally on node1

      [root@node1 pki]# cat /run/flannel/subnet.env 
      DOCKER_OPT_BIP="--bip=172.17.83.1/24"
      DOCKER_OPT_IPMASQ="--ip-masq=false"
      DOCKER_OPT_MTU="--mtu=1450"
      DOCKER_NETWORK_OPTIONS=" --bip=172.17.83.1/24 --ip-masq=false --mtu=1450"
      
    • docker subnet options generated locally on node2

      [root@node2 pki]# cat /run/flannel/subnet.env 
      DOCKER_OPT_BIP="--bip=172.17.46.1/24"
      DOCKER_OPT_IPMASQ="--ip-masq=false"
      DOCKER_OPT_MTU="--mtu=1450"
      DOCKER_NETWORK_OPTIONS=" --bip=172.17.46.1/24 --ip-masq=false --mtu=1450"
      
    • Host node1 interfaces

      [root@node1 pki]# ifconfig
      ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.159.4  netmask 255.255.255.0  broadcast 192.168.159.255
              inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
              ether 00:50:56:34:3a:20  txqueuelen 1000  (Ethernet)
              RX packets 430580  bytes 54653587 (52.1 MiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 431431  bytes 56850150 (54.2 MiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      
      flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
              inet 172.17.83.0  netmask 255.255.255.255  broadcast 0.0.0.0
              inet6 fe80::58f0:7aff:febf:de79  prefixlen 64  scopeid 0x20<link>
              ether 5a:f0:7a:bf:de:79  txqueuelen 0  (Ethernet)
              RX packets 0  bytes 0 (0.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 0  bytes 0 (0.0 B)
              TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
      
      lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
              inet 127.0.0.1  netmask 255.0.0.0
              inet6 ::1  prefixlen 128  scopeid 0x10<host>
              loop  txqueuelen 1000  (Local Loopback)
              RX packets 2749  bytes 333239 (325.4 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 2749  bytes 333239 (325.4 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      
    • Host node2 interfaces

      [root@node2 ~]# ifconfig
      ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.159.5  netmask 255.255.255.0  broadcast 192.168.159.255
              inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
              inet6 fe80::2b45:985c:3f4e:908b  prefixlen 64  scopeid 0x20<link>
              ether 00:50:56:24:97:9a  txqueuelen 1000  (Ethernet)
              RX packets 1355  bytes 128086 (125.0 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 1421  bytes 157618 (153.9 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      
      flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
              inet 172.17.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
              inet6 fe80::f828:21ff:fef6:2bd4  prefixlen 64  scopeid 0x20<link>
              ether fa:28:21:f6:2b:d4  txqueuelen 0  (Ethernet)
              RX packets 0  bytes 0 (0.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 0  bytes 0 (0.0 B)
              TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
      
      lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
              inet 127.0.0.1  netmask 255.0.0.0
              inet6 ::1  prefixlen 128  scopeid 0x10<host>
              loop  txqueuelen 1000  (Local Loopback)
              RX packets 64  bytes 5568 (5.4 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 64  bytes 5568 (5.4 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
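
The DOCKER_NETWORK_OPTIONS line in /run/flannel/subnet.env is exactly what the reconfigured docker service below will consume; sourcing the file in a shell previews the options docker will start with (node1 values shown; node2 differs):

source /run/flannel/subnet.env && echo "$DOCKER_NETWORK_OPTIONS"
# --bip=172.17.83.1/24 --ip-masq=false --mtu=1450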
      
Reconfiguring the docker Service
Service file configuration

Point EnvironmentFile at /run/flannel/subnet.env, the network configuration that flannel generated for docker.

cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Engine Service
Documentation=https://docs.docker.com
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/local/bin/dockerd $DOCKER_NETWORK_OPTIONS 
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Trial service start
[root@node1 etc]# systemctl daemon-reload && systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.

journalctl -xe shows the following error in the log:

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: bip: (from flag: 172.17.83.1/24, from file: 172.17.1.1/24)

Fix:
The direct-routing section earlier noted that /etc/docker/daemon.json and the --bip flag must not both be set. The DOCKER_NETWORK_OPTIONS variable now already contains --bip, so /etc/docker/daemon.json must be deleted.

rm -f /etc/docker/daemon.json
systemctl daemon-reload && systemctl start docker
Network check

Host node1 network

[root@node1 etc]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.83.1  netmask 255.255.255.0  broadcast 172.17.83.255
        inet6 fe80::42:e2ff:feaf:525d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:e2:af:52:5d  txqueuelen 0  (Ethernet)
        RX packets 156  bytes 9898 (9.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67  bytes 80756 (78.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.159.4  netmask 255.255.255.0  broadcast 192.168.159.255
        inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:34:3a:20  txqueuelen 1000  (Ethernet)
        RX packets 530560  bytes 67349085 (64.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 533082  bytes 70220830 (66.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.83.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::58f0:7aff:febf:de79  prefixlen 64  scopeid 0x20<link>
        ether 5a:f0:7a:bf:de:79  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2841  bytes 338023 (330.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2841  bytes 338023 (330.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Host node2 network

[root@node2 etc]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.46.1  netmask 255.255.255.0  broadcast 172.17.46.255
        inet6 fe80::42:d4ff:fe47:66f4  prefixlen 64  scopeid 0x20<link>
        ether 02:42:d4:47:66:f4  txqueuelen 0  (Ethernet)
        RX packets 200  bytes 12034 (11.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40  bytes 3394 (3.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.159.5  netmask 255.255.255.0  broadcast 192.168.159.255
        inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2b45:985c:3f4e:908b  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:24:97:9a  txqueuelen 1000  (Ethernet)
        RX packets 2201  bytes 199369 (194.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2147  bytes 256110 (250.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f828:21ff:fef6:2bd4  prefixlen 64  scopeid 0x20<link>
        ether fa:28:21:f6:2b:d4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 64  bytes 5568 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 64  bytes 5568 (5.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
Testing Cross-Host Container Access over flannel
  • Start container node1
    Start the container
    docker start node1

    Check the host network

    [root@node1 etc]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.83.1  netmask 255.255.255.0  broadcast 172.17.83.255
            inet6 fe80::42:e2ff:feaf:525d  prefixlen 64  scopeid 0x20<link>
            ether 02:42:e2:af:52:5d  txqueuelen 0  (Ethernet)
            RX packets 301  bytes 21598 (21.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 95  bytes 82756 (80.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.159.4  netmask 255.255.255.0  broadcast 192.168.159.255
            inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
            ether 00:50:56:34:3a:20  txqueuelen 1000  (Ethernet)
            RX packets 613016  bytes 77826826 (74.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 616778  bytes 81210151 (77.4 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.83.0  netmask 255.255.255.255  broadcast 0.0.0.0
            inet6 fe80::58f0:7aff:febf:de79  prefixlen 64  scopeid 0x20<link>
            ether 5a:f0:7a:bf:de:79  txqueuelen 0  (Ethernet)
            RX packets 22  bytes 1536 (1.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 139  bytes 11712 (11.4 KiB)
            TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 2917  bytes 341975 (333.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2917  bytes 341975 (333.9 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth9c659ff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet6 fe80::80a2:90ff:fe94:c603  prefixlen 64  scopeid 0x20<link>
            ether 82:a2:90:94:c6:03  txqueuelen 0  (Ethernet)
            RX packets 145  bytes 13730 (13.4 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 36  bytes 2656 (2.5 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
          
    [root@node1 etc]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    172.17.46.0     172.17.46.0     255.255.255.0   UG    0      0        0 flannel.1
    172.17.83.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33          
    

    Attach to container node1
    docker attach node1

    Check container node1's network

    [root@0402a3fb29e6 /]# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.83.2  netmask 255.255.255.0  broadcast 172.17.83.255
            ether 02:42:ac:11:53:02  txqueuelen 0  (Ethernet)
            RX packets 36  bytes 2656 (2.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 145  bytes 13730 (13.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 50  bytes 4200 (4.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 50  bytes 4200 (4.1 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
  • Start container node2
    Start the container
    docker start node2

    Check the host network

    [root@node2 ~]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.46.1  netmask 255.255.255.0  broadcast 172.17.46.255
            inet6 fe80::42:d4ff:fe47:66f4  prefixlen 64  scopeid 0x20<link>
            ether 02:42:d4:47:66:f4  txqueuelen 0  (Ethernet)
            RX packets 411  bytes 26542 (25.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 79  bytes 6604 (6.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.159.5  netmask 255.255.255.0  broadcast 192.168.159.255
            inet6 fe80::69a7:b207:afb9:9db4  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::2b45:985c:3f4e:908b  prefixlen 64  scopeid 0x20<link>
            ether 00:50:56:24:97:9a  txqueuelen 1000  (Ethernet)
            RX packets 3649  bytes 336880 (328.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 3511  bytes 407431 (397.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
            inet6 fe80::f828:21ff:fef6:2bd4  prefixlen 64  scopeid 0x20<link>
            ether fa:28:21:f6:2b:d4  txqueuelen 0  (Ethernet)
            RX packets 18  bytes 1548 (1.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 190  bytes 13560 (13.2 KiB)
            TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 64  bytes 5568 (5.4 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 64  bytes 5568 (5.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth71ae48a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet6 fe80::ccb2:8eff:fe58:e9f9  prefixlen 64  scopeid 0x20<link>
            ether ce:b2:8e:58:e9:f9  txqueuelen 0  (Ethernet)
            RX packets 211  bytes 17462 (17.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 47  bytes 3866 (3.7 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    [root@node2 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.159.2   0.0.0.0         UG    100    0        0 ens33
    172.17.46.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
    172.17.83.0     172.17.83.0     255.255.255.0   UG    0      0        0 flannel.1
    192.168.159.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
    

    Attach to container node2
    docker attach node2

    Check container node2's network

    [root@bc6194e6391e /]# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.46.2  netmask 255.255.255.0  broadcast 172.17.46.255
            ether 02:42:ac:11:2e:02  txqueuelen 0  (Ethernet)
            RX packets 47  bytes 3866 (3.7 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 211  bytes 17462 (17.0 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
  • Firewall configuration

    flannel sends its encapsulated packets over UDP port 8472, so this port must be opened

    firewall-cmd --zone=public --add-port=8472/udp --permanent
    firewall-cmd --reload
    
    [root@node2 ~]# firewall-cmd --list-all
    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: ens33
      sources: 
      services: ssh dhcpv6-client
      ports: 2380/tcp 2379/tcp 8472/udp
      protocols: 
      masquerade: no
      forward-ports: 
      source-ports: 
      icmp-blocks: 
      rich rules:
    
  • Container node1 network tests

    # host node1 to container node1
    [root@node1 etc]# ping 172.17.83.2 -c 3
    PING 172.17.83.2 (172.17.83.2) 56(84) bytes of data.
    64 bytes from 172.17.83.2: icmp_seq=1 ttl=64 time=0.103 ms
    64 bytes from 172.17.83.2: icmp_seq=2 ttl=64 time=0.116 ms
    64 bytes from 172.17.83.2: icmp_seq=3 ttl=64 time=0.171 ms
    
    --- 172.17.83.2 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1999ms
    rtt min/avg/max/mdev = 0.103/0.130/0.171/0.029 ms
    
    # host node2 to container node1
    [root@node2 ~]# ping 172.17.83.2 -c 3
    PING 172.17.83.2 (172.17.83.2) 56(84) bytes of data.
    64 bytes from 172.17.83.2: icmp_seq=1 ttl=63 time=0.823 ms
    64 bytes from 172.17.83.2: icmp_seq=2 ttl=63 time=0.699 ms
    64 bytes from 172.17.83.2: icmp_seq=3 ttl=63 time=0.748 ms
    
    --- 172.17.83.2 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 0.699/0.756/0.823/0.060 ms
    
    # container node1 to host node1
    [root@0402a3fb29e6 /]# ping 192.168.159.4 -c 3
    PING 192.168.159.4 (192.168.159.4) 56(84) bytes of data.
    64 bytes from 192.168.159.4: icmp_seq=1 ttl=64 time=0.140 ms
    64 bytes from 192.168.159.4: icmp_seq=2 ttl=64 time=0.120 ms
    64 bytes from 192.168.159.4: icmp_seq=3 ttl=64 time=0.132 ms
    
    --- 192.168.159.4 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2000ms
    rtt min/avg/max/mdev = 0.120/0.130/0.140/0.015 ms
    
    # container node1 to host node2
    [root@0402a3fb29e6 /]# ping 192.168.159.5 -c 3
    PING 192.168.159.5 (192.168.159.5) 56(84) bytes of data.
    64 bytes from 192.168.159.5: icmp_seq=1 ttl=63 time=1.08 ms
    64 bytes from 192.168.159.5: icmp_seq=2 ttl=63 time=0.670 ms
    64 bytes from 192.168.159.5: icmp_seq=3 ttl=63 time=0.711 ms
    
    --- 192.168.159.5 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2003ms
    rtt min/avg/max/mdev = 0.670/0.821/1.083/0.187 ms
    
    # container node1 to container node2
    [root@0402a3fb29e6 /]# ping 172.17.46.2 -c 3
    PING 172.17.46.2 (172.17.46.2) 56(84) bytes of data.
    64 bytes from 172.17.46.2: icmp_seq=1 ttl=62 time=1.55 ms
    64 bytes from 172.17.46.2: icmp_seq=2 ttl=62 time=0.862 ms
    64 bytes from 172.17.46.2: icmp_seq=3 ttl=62 time=0.757 ms
    
    --- 172.17.46.2 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 0.757/1.058/1.556/0.355 ms
    
    # container node1 to baidu.com (external network)
    [root@0402a3fb29e6 /]# ping www.baidu.com -c 3
    PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
    64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=127 time=36.0 ms
    64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=127 time=35.9 ms
    64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=127 time=35.8 ms
    
    --- www.a.shifen.com ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2003ms
    rtt min/avg/max/mdev = 35.882/35.951/36.009/0.163 ms
    
  • Container node2 network tests (same procedure, run from the node2 side)

This completes the Docker cross-host cluster network configuration; the next step is to install the remaining services on the worker nodes.
