Docker Cross-Host Container Networking (1): Building a Multi-Host Docker Network with Linux Bridge
Preface
As mentioned earlier, once we build a Docker cluster we will inevitably need network connectivity between containers running on different hosts. To conserve IP addresses on the host network, we prefer to let containers communicate over docker0's own network, assigning host-network IPs only to the primary nodes (container instances) that actually need them. This is a fairly typical and reasonable requirement.
To make full use of the docker0 bridge's IP space, we pass a different --fixed-cidr option to the Docker daemon on each host, confining the containers on each host to a distinct subnet.
The setup uses two hosts, each with two NICs, eth0 and eth1; eth1 serves as the management NIC.
By default every host creates a docker0 bridge with the IP 172.17.0.1. To avoid an address conflict, the second host's docker0 bridge is changed to 172.17.0.2, so the two bridges can reach each other.
1. On the first host, configure the docker0 bridge's network settings
sm@controller:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
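# docker0 is declared as a static bridge that enslaves the physical NIC eth0,
# so containers attached to it share the hosts' physical layer-2 segment.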
auto docker0
iface docker0 inet static
address 172.17.0.1
netmask 255.255.0.0
bridge_ports eth0
bridge_stp off
bridge_fd 0
auto eth1
iface eth1 inet dhcp
After making these changes, rebooting the server is recommended.
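If a full reboot is inconvenient, the bridge can usually be brought up in place. A minimal sketch (assuming bridge-utils is installed, which the bridge_ports stanza requires anyway):
sudo ifdown docker0 2>/dev/null; sudo ifup docker0
brctl show docker0    # eth0 should appear in the "interfaces" column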
2. On the first host, restrict the container address range by adding DOCKER_OPTS="--fixed-cidr=172.17.1.1/24" to /etc/default/docker:
sm@controller:~$ cat /etc/default/docker
# Docker Upstart and SysVinit configuration file
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
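# Limit this host's containers to addresses within 172.17.1.1/24: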
DOCKER_OPTS="--fixed-cidr=172.17.1.1/24"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
#DOCKER_OPTS="-b=br-docker"
After adding the option, restart the Docker service.
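On Ubuntu 14.04 the service is managed by Upstart; one way to restart it and confirm the option was picked up (a quick check; output varies by Docker version):
sudo service docker restart
ps -ef | grep docker    # the daemon command line should now include --fixed-cidr=172.17.1.1/24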
3. Repeat the same steps on the second host, changing the docker0 address to 172.17.0.2.
In addition, add DOCKER_OPTS="--fixed-cidr=172.17.2.1/24" and restart the Docker service.
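For reference, the second host's /etc/network/interfaces would differ from the first host's only in the docker0 address (a sketch; this file is not shown in the original):
auto docker0
iface docker0 inet static
address 172.17.0.2
netmask 255.255.0.0
bridge_ports eth0
bridge_stp off
bridge_fd 0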
To recap: on the first host, docker0 holds 172.17.0.1/16, containers obtain their IPs from docker0's network, and --fixed-cidr confines them to the 172.17.1.1/24 range.
Likewise, on the second host docker0 holds 172.17.0.2/16 and containers are confined to the 172.17.2.1/24 range.
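Note that Docker interprets --fixed-cidr=172.17.1.1/24 as the network 172.17.1.0/24. Because both /24 allocation ranges fall inside docker0's 172.17.0.0/16 network, and both bridges sit on the same physical segment through eth0, containers on either host can reach each other directly, with no routing or NAT in between. In brief (values taken from the setup above):
host1 (controller): docker0 172.17.0.1/16, containers from 172.17.1.0/24
host2 (docker2):    docker0 172.17.0.2/16, containers from 172.17.2.0/24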
Testing host1
Check its IP and verify that host2's docker0 address is reachable:
root@controller:~# ifconfig docker0
docker0 Link encap:Ethernet HWaddr 00:0c:29:d3:5a:fe
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fed3:5afe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:85 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6599 (6.5 KB) TX bytes:648 (648.0 B)
root@controller:~# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.561 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.700 ms
Create a container instance on host1; it receives the IP 172.17.1.2:
root@controller:~# docker run -it --name test1 ubuntu:14.04 /bin/bash
root@54fd72ea7832:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:01:02
inet addr:172.17.1.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:568 (568.0 B) TX bytes:508 (508.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
On host2, check docker0's IP and create a container instance:
root@docker2:~# ifconfig docker0
docker0 Link encap:Ethernet HWaddr 00:0c:29:c0:73:8c
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fec0:738c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3697 errors:0 dropped:0 overruns:0 frame:0
TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:327320 (327.3 KB) TX bytes:3420 (3.4 KB)
root@docker2:~# docker run -it --name test2 ubuntu:14.04 /bin/bash
root@4d049a6397b3:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:02:01
inet addr:172.17.2.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:201/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:920 (920.0 B) TX bytes:508 (508.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
From host2's container, test connectivity to host1's container at 172.17.1.2:
root@4d049a6397b3:/# ping 172.17.1.2
PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
64 bytes from 172.17.1.2: icmp_seq=1 ttl=64 time=0.500 ms
64 bytes from 172.17.1.2: icmp_seq=2 ttl=64 time=0.669 ms
64 bytes from 172.17.1.2: icmp_seq=3 ttl=64 time=0.599 ms
^C
--- 172.17.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.500/0.589/0.669/0.072 ms
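The reverse direction can be checked the same way from test1 on host1 (a step not captured in the original session):
root@54fd72ea7832:/# ping 172.17.2.1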