Advanced Network Configuration

1. Bond Networks

    Red Hat Enterprise Linux allows administrators to bind multiple network interfaces into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Depending on the bonding mode selected, channel bonding makes two or more network interfaces act as one, increasing bandwidth and/or providing redundancy.
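The bonding driver is loaded automatically when the first bond connection is created, but it can be checked by hand. A minimal sketch, assuming a stock RHEL kernel (modprobe/lsmod and the sysfs path below are standard kernel interfaces, not commands from this lab):

[root@localhost ~]# modprobe bonding  #load the bonding module manually if needed
[root@localhost ~]# lsmod | grep bonding  #confirm the module is loaded
[root@localhost ~]# cat /sys/class/net/bonding_masters  #list bond interfaces known to the driver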

 

Choosing a Linux Ethernet bonding mode (a sketch for switching modes follows the list)

Mode 0 (balance-rr) - Round-robin policy: packets are transmitted in turn across all slave interfaces; any slave can receive.

Mode 1 (active-backup) - Fault tolerance. Only one slave interface is active at a time; if it fails, another slave takes over.

Mode 3 (broadcast) - Fault tolerance. Every packet is transmitted on all slave interfaces.
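The mode is set when the bond is created (as in the lab below), but an existing bond's mode can also be changed through its bond.options property. A minimal sketch, assuming a bond0 connection already exists (the miimon link-monitoring interval is an illustrative value, not taken from this lab):

[root@localhost ~]# nmcli connection modify bond0 bond.options "mode=balance-rr,miimon=100"
[root@localhost ~]# nmcli connection up bond0  #re-activate so the new mode takes effect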

Lab 1: Bonding mode 1 (active-backup)

1. Add a NIC to the virtual machine

2.[root@localhost ~]# nmcli connection delete eth0  #delete the existing configuration for the NIC

3.[root@localhost ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode active-backup ip4 172.25.254.126/24  #create a new bond connection

4.[root@localhost ~]# watch -n 1 cat /proc/net/bonding/bond0  #monitor the bond status
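On a stock kernel this file reports the Bonding Mode, the Currently Active Slave, and an MII Status block for each slave; once the slave NICs are added in steps 6 and 7 below, they should appear here with MII Status: up.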

5.[root@localhost ~]# ping 172.25.254.26

  PING 172.25.254.26 (172.25.254.26) 56(84) bytes of data.   

 #fails, because no slave NICs have been configured for the bond yet

6.[root@localhost ~]# nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0

7.[root@localhost ~]# nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0  #add the two NICs as slaves working for the bond

8.[root@localhost ~]# ping 172.25.254.26  #succeeds

PING 172.25.254.26 (172.25.254.26) 56(84) bytes of data.

64 bytes from 172.25.254.26: icmp_seq=1 ttl=64 time=0.099 ms

64 bytes from 172.25.254.26: icmp_seq=2 ttl=64 time=0.105 ms

64 bytes from 172.25.254.26: icmp_seq=3 ttl=64 time=0.126 ms
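With both slaves up, failover can be tested by taking the active slave down while the watch window from step 4 is running. A minimal sketch, assuming eth0 is currently the active slave (standard nmcli commands; the expected behaviour is described, not captured output):

[root@localhost ~]# nmcli device disconnect eth0  #simulate failure of the active slave
#in the watch window, "Currently Active Slave" should switch to eth1 and the ping should keep running
[root@localhost ~]# nmcli connection up eth0  #bring the slave back into the bond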

2. Team Network Interfaces


Team is functionally similar to bonding (bond0).

Team does not require manually loading a kernel module.

Team is more extensible.

Team supports up to 8 NICs; this capability did not exist before RHEL 7.

Round-robin balancing is mechanical: it simply alternates packets across the ports in turn.

Load balancing, by contrast, distributes the work according to actual load (see the runner sketch below).
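The behaviour is selected by the runner named in the team's JSON config. A minimal sketch of switching runners on an existing team0 connection (roundrobin, loadbalance, and activebackup are the standard teamd runners; only activebackup is actually used in this lab):

[root@localhost Desktop]# nmcli connection modify team0 team.config '{"runner":{"name":"roundrobin"}}'  #strict rotation across ports
[root@localhost Desktop]# nmcli connection modify team0 team.config '{"runner":{"name":"loadbalance"}}'  #distribute by actual load
[root@localhost Desktop]# nmcli connection up team0  #re-activate to apply the new runner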

[root@localhost Desktop]# watch -n 1 'teamdctl team0 stat'  #monitoring command

[root@localhost Desktop]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"activebackup"}}' ip4 172.25.254.126/24  #create a team connection

Connection 'team0' (825e8e72-f445-4cb5-aa68-7aa593d51497) successfully added.

[root@localhost Desktop]# ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        ether 52:54:00:00:47:0a  txqueuelen 1000  (Ethernet)

        RX packets 79  bytes 5489 (5.3 KiB)

        RX errors 0  dropped 37  overruns 0  frame 0

        TX packets 442  bytes 21223 (20.7 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        ether 52:54:00:48:56:5a  txqueuelen 1000  (Ethernet)

        RX packets 473  bytes 23963 (23.4 KiB)

        RX errors 0  dropped 423  overruns 0  frame 0

        TX packets 36  bytes 1624 (1.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 127.0.0.1  netmask 255.0.0.0

        inet6 ::1  prefixlen 128  scopeid 0x10<host>

        loop  txqueuelen 0  (Local Loopback)

        RX packets 348  bytes 34336 (33.5 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 348  bytes 34336 (33.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

team0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500

        inet 172.25.254.126  netmask 255.255.255.0  broadcast 172.25.254.255

        ether aa:5a:48:1d:7e:65  txqueuelen 0  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
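Note that team0 is listed with flags=4099, without the RUNNING flag, and with zero packet counters: the team interface exists but cannot carry traffic until ports are attached to it in the next two commands.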

[root@localhost Desktop]# nmcli connection add con-name eth0 ifname eth0 type team-slave master team0

Connection 'eth0' (4c2668fa-f942-4c9a-83a0-7a0c776b1722) successfully added.  #add eth0 as a port of the team

[root@localhost Desktop]# nmcli connection add con-name eth1 ifname eth1 type team-slave master team0

Connection 'eth1' (bc45902b-a912-4d96-92b0-cb827567e9d6) successfully added.  #add eth1 as a port of the team

## Monitoring output ##

Every 1.0s: teamdctl team0 stat                        Wed May 23 15:20:45 2018

 

setup:

  runner: activebackup

ports:

  eth0

    link watches:

      link summary: up

      instance[link_watch_0]:

        name: ethtool

        link: up

  eth1

    link watches:

      link summary: up

      instance[link_watch_0]:

        name: ethtool

        link: up

runner:

  active port: eth0
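As with the bond, failover can be verified by disabling the active port while the monitor is running. A minimal sketch, assuming eth0 is the active port as shown above (standard nmcli/teamdctl commands; the expected result is described, not captured output):

[root@localhost Desktop]# nmcli device disconnect eth0  #take the active port down
[root@localhost Desktop]# teamdctl team0 stat  #"active port" should now read eth1
[root@localhost Desktop]# nmcli connection up eth0  #restore the port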
