In a K8s cluster, the Flannel network must be installed and configured on every node.
What the Flannel network does: containers running on the different nodes of a K8s cluster need to communicate across hosts (i.e., over the network). Flannel's job is precisely to solve this problem of Docker containers communicating across hosts.
In that respect, its role is the same as that of Macvlan and Overlay networks.

Let's start the hands-on part:
1. Install Flannel on the master and on both node machines

yum install flannel -y
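
A quick sanity check that the package landed (rpm -q is enough on its own; flanneld also reports a version, though the exact flag may vary by release):

rpm -q flannel      # should print the installed flannel package and version
flanneld -version   # the daemon prints its version and exits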

2. Configure the flannel network on all three nodes

vim /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.200.100:2379" # change this to the master's IP; it is already set here because this setup has been verified

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network" 

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

PS: node1 and node2 use the same configuration as above; the IP to fill in is the master's IP.
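
Before moving on, it is worth checking from each node that etcd on the master is actually reachable. A minimal check, assuming etcd is listening on 192.168.200.100:2379 as configured above:

curl -s http://192.168.200.100:2379/version   # should return etcd's version as JSON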

3. On the k8s-master node, configure the IP range for the Flannel network

etcdctl set /atomic.io/network/config '{ "Network":"172.16.0.0/16" }'
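
You can read the key back to confirm it was stored (same etcdctl v2 syntax as the set above):

etcdctl get /atomic.io/network/config   # should echo { "Network":"172.16.0.0/16" }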

4. Start the services:

systemctl restart flanneld && systemctl enable flanneld
systemctl restart docker # the Docker service must be restarted so it picks up flannel's subnet settings
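
After the restart, each host should have leased its own /24 out of 172.16.0.0/16. A quick way to verify (the subnet.env path is the default used by the CentOS flannel package; the values will differ per host):

cat /run/flannel/subnet.env        # FLANNEL_SUBNET shows this host's /24
ip addr show docker0 | grep inet   # docker0 should now sit inside that subnet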

Now for the test:
can containers on different hosts communicate with each other?

Start one container on the master and one on each of the two nodes, and test whether the containers on these three machines can reach each other.
Perform the following steps on all three nodes.

First, adjust the firewall:
Starting with Docker 1.13, Docker changed the default policy of the FORWARD chain in the iptables filter table from ACCEPT to DROP, which is why the cross-host pings fail.

So all we need to do is change the default policy back from DROP to ACCEPT.
Run this on all three nodes:

iptables -P FORWARD ACCEPT  # takes effect immediately, but is lost after a reboot
iptables -t filter -L -n    # inspect the current rules

But how do we make it permanent?
Add iptables -P FORWARD ACCEPT to the startup sequence of the docker service.

vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
# add the following line (systemd only supports full-line comments, so keep the directive on a line by itself):
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStart=/usr/bin/dockerd-current \

Restart docker:

systemctl daemon-reload && systemctl restart docker
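
After the restart, confirm the policy survived; iptables -S prints the chain policy on its first line:

iptables -S FORWARD | head -n 1   # expect: -P FORWARD ACCEPT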

Time to test:
run a busybox container on each of the three nodes.

master:
docker pull busybox
docker run -it docker.io/busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue 
    link/ether 02:42:ac:10:36:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.54.2/24 scope global eth0  
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe10:3602/64 scope link 
       valid_lft forever preferred_lft forever

node1:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue 
    link/ether 02:42:ac:10:4d:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.77.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe10:4d02/64 scope link 
       valid_lft forever preferred_lft forever

node2:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue 
    link/ether 02:42:ac:10:01:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe10:102/64 scope link 
       valid_lft forever preferred_lft forever

Record the IP of the busybox container on each node:

master's busybox container IP: 172.16.54.2
node1's busybox container IP: 172.16.77.2
node2's busybox container IP: 172.16.1.2
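
Note that each node was handed a different /24 (172.16.54.0/24, 172.16.77.0/24 and 172.16.1.0/24) out of the 172.16.0.0/16 pool configured in etcd earlier. On any of the hosts you can see how traffic to the other subnets is routed; with flannel's default UDP backend the route points at the flannel0 interface (the interface name assumes that default backend):

ip route | grep 172.16   # expect a 172.16.0.0/16 route via flannel0 plus the local docker0 /24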

Now test whether the containers can ping each other across hosts.
First, from the busybox container on the master, ping the containers on the other hosts:

/ # ping -c2 172.16.77.2  # ping the busybox container on node1
PING 172.16.77.2 (172.16.77.2): 56 data bytes
64 bytes from 172.16.77.2: seq=0 ttl=60 time=2.158 ms
64 bytes from 172.16.77.2: seq=1 ttl=60 time=0.490 ms # success

--- 172.16.77.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.490/1.324/2.158 ms
/ # ping -c2 172.16.1.2     # ping the busybox container on node2
PING 172.16.1.2 (172.16.1.2): 56 data bytes
64 bytes from 172.16.1.2: seq=0 ttl=60 time=1.698 ms
64 bytes from 172.16.1.2: seq=1 ttl=60 time=0.841 ms # success

--- 172.16.1.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.841/1.269/1.698 ms

Now test from node1:

/ # ping -c2 172.16.54.2  # ping the busybox container on master
PING 172.16.54.2 (172.16.54.2): 56 data bytes
64 bytes from 172.16.54.2: seq=0 ttl=60 time=1.817 ms
64 bytes from 172.16.54.2: seq=1 ttl=60 time=1.438 ms

--- 172.16.54.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.438/1.627/1.817 ms
/ # ping -c2 172.16.1.2   # ping the busybox container on node2
PING 172.16.1.2 (172.16.1.2): 56 data bytes
64 bytes from 172.16.1.2: seq=0 ttl=60 time=1.995 ms
64 bytes from 172.16.1.2: seq=1 ttl=60 time=1.699 ms # success, node2's container is reachable

--- 172.16.1.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.699/1.847/1.995 ms  # likewise successful

Now test from node2:

/ # ping -c2 172.16.77.2
PING 172.16.77.2 (172.16.77.2): 56 data bytes
64 bytes from 172.16.77.2: seq=0 ttl=60 time=0.619 ms
64 bytes from 172.16.77.2: seq=1 ttl=60 time=0.492 ms # ping to node1 succeeded

--- 172.16.77.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.492/0.555/0.619 ms
/ # ping -c2 172.16.54.2
PING 172.16.54.2 (172.16.54.2): 56 data bytes
64 bytes from 172.16.54.2: seq=0 ttl=60 time=1.168 ms
64 bytes from 172.16.54.2: seq=1 ttl=60 time=0.715 ms # master's container is reachable

--- 172.16.54.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.715/0.941/1.168 ms

With that, cross-host container communication in the K8s cluster is working.

----- From an ordinary student of the class of 2019 at the School of Computer Engineering, Henan University of Economics and Trade, sharing the new things I learn day to day through this blog. I will keep at it. Thanks for reading, and I hope it helps you!
If you repost this article, please include a link back to it. Thank you!
