Reference: Section 4.2 of the TF Chinese Community Wiki documentation

Communication between different subnets in the same k8s namespace

  • Before starting, it is recommended to copy the master's admin.conf to every node and set it up as follows
[root@node02 ~]# scp root@192.168.122.116:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
root@192.168.122.116's password: 
admin.conf                                                                                                                                                                                                 100% 5455     3.2MB/s   00:00    
[root@node02 ~]# 
[root@node02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node02 ~]# source ~/.bash_profile

Create namespaces

  • Only test-ns1 is used in this chapter
[root@master02 ~]# kubectl create namespace test-ns1
namespace/test-ns1 created
[root@master02 ~]# kubectl create namespace test-ns2
namespace/test-ns2 created
[root@master02 ~]# 
[root@master02 ~]# kubectl get namespace
NAME          STATUS   AGE
contrail      Active   15h
default       Active   16h
kube-public   Active   16h
kube-system   Active   16h
test-ns1      Active   4s
test-ns2      Active   2s
[root@master02 ~]# 

Create two IPAMs

  • k8s-ns1-pod-ipam-01 and k8s-ns1-pod-ipam-02
    • ipam-01 -> 10.10.10.0/24, gw=10.10.10.254
    • ipam-02 -> 10.10.20.0/24, gw=10.10.20.254
      ipam.png

Create virtual networks

  • k8s-ns1-pod-net01 uses k8s-ns1-pod-ipam-01
  • k8s-ns1-pod-net02 uses k8s-ns1-pod-ipam-02
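Before attaching pods to these networks, the address plan can be sanity-checked; a minimal sketch with Python's `ipaddress` module, using the subnet and gateway values from the IPAM definitions above:

```python
import ipaddress

# Address plan from the two IPAMs defined above.
net01 = ipaddress.ip_network("10.10.10.0/24")  # k8s-ns1-pod-ipam-01
net02 = ipaddress.ip_network("10.10.20.0/24")  # k8s-ns1-pod-ipam-02
gw01 = ipaddress.ip_address("10.10.10.254")
gw02 = ipaddress.ip_address("10.10.20.254")

# The subnets must not overlap, and each gateway must sit inside its subnet.
assert not net01.overlaps(net02)
assert gw01 in net01 and gw02 in net02
print("address plan OK")
```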

Create pods

  • The original article used the nginx image, but that image does not even include basic commands such as ip and ping!
  • Use the tiny busybox image instead: pull it on the deployer, then push it to the local registry
[root@deployer ~]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
0669b0daf1fb: Pull complete 
Digest: sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
Status: Downloaded newer image for busybox:latest
[root@deployer ~]# 
[root@deployer ~]# docker image list | grep busybox
busybox                                                 latest              83aa35aa1c79        9 days ago          1.22MB
[root@deployer ~]# docker tag busybox:latest 192.168.122.160/busybox:latest
[root@deployer ~]# docker push 192.168.122.160/busybox:latest
The push refers to repository [192.168.122.160/busybox]
a6d503001157: Pushed 
latest: digest: sha256:afe605d272837ce1732f390966166c2afff5391208ddd57de10942748694049d size: 527
[root@deployer ~]# 
[root@deployer ~]# curl -XGET http://localhost:80/v2/_catalog
{"repositories":["busybox","contrail-analytics-alarm-gen","contrail-analytics-api","contrail-analytics-collector","contrail-analytics-query-engine","contrail-analytics-snmp-collector","contrail-analytics-snmp-topology","contrail-controller-config-api","contrail-controller-config-devicemgr","contrail-controller-config-dnsmasq","contrail-controller-config-schema","contrail-controller-config-stats","contrail-controller-config-svcmonitor","contrail-controller-control-control","contrail-controller-control-dns","contrail-controller-control-named","contrail-controller-webui-job","contrail-controller-webui-web","contrail-external-cassandra","contrail-external-kafka","contrail-external-rabbitmq","contrail-external-redis","contrail-external-rsyslogd","contrail-external-zookeeper","contrail-kubernetes-cni-init","contrail-kubernetes-kube-manager","contrail-node-init","contrail-nodemgr","contrail-status","contrail-vrouter-agent","contrail-vrouter-kernel-init","coredns","etcd","kube-apiserver","kube-controller-manager","kube-proxy","kube-scheduler","kubernetes-dashboard-amd64","nginx","pause"]}
[root@deployer ~]# 

[root@node01 ~]# docker pull 192.168.122.160/busybox

[root@node02 ~]# docker pull 192.168.122.160/busybox
  • Below is a sample yaml file
[root@master02 Dockerfile]# cat k8s-ns1-pod-net01.yml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    opencontrail.org/network: '{"domain":"default-domain","project":"k8s-test-ns1","name":"k8s-ns1-pod-net01"}'
  name: busybox01-ns1-net01
  labels:
    app: busybox-ns1
  namespace: test-ns1
spec:
  containers:
  - name: busybox
    image: 192.168.122.160/busybox:latest
    imagePullPolicy: IfNotPresent
    command: 
    - sleep
    - "3600"
  restartPolicy: Always
[root@master02 Dockerfile]# 
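Note that the value of the `opencontrail.org/network` annotation is a JSON string (hence the single quotes in the YAML), not a nested YAML map; a quick sketch of decoding it:

```python
import json

# Annotation value copied from the manifest above; it is JSON, so the
# single quotes in the YAML are required to keep it a single string.
ann = '{"domain":"default-domain","project":"k8s-test-ns1","name":"k8s-ns1-pod-net01"}'
net = json.loads(ann)
print(net["project"], net["name"])  # → k8s-test-ns1 k8s-ns1-pod-net01
```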
  • k8s-ns1-pod-net02.yml is identical except that every "k8s-ns1-pod-net01" above is replaced with "k8s-ns1-pod-net02"
  • Apply the manifests on the master node
[root@master02 Dockerfile]# kubectl apply -f k8s-ns1-pod-net01.yml 
pod/busybox01-ns1-net01 created
[root@master02 Dockerfile]# 
[root@master02 Dockerfile]# kubectl apply -f k8s-ns1-pod-net02.yml 
pod/busybox01-ns1-net02 created
[root@master02 Dockerfile]# 
[root@master02 Dockerfile]# 
[root@master02 Dockerfile]# kubectl get pods -n test-ns1 -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE                    NOMINATED NODE
busybox01-ns1-net01   1/1     Running   0          19s   10.10.10.1   node03                  <none>
busybox01-ns1-net02   1/1     Running   0          15s   10.10.20.1   localhost.localdomain   <none>
[root@master02 Dockerfile]# 
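Since the net02 manifest differs from net01 only in the "net01"/"net02" suffix, it can be generated mechanically; a sketch with a trimmed-down manifest embedded to keep the example self-contained:

```python
# Trimmed-down copy of the net01 manifest metadata shown earlier.
net01_manifest = """\
metadata:
  annotations:
    opencontrail.org/network: '{"domain":"default-domain","project":"k8s-test-ns1","name":"k8s-ns1-pod-net01"}'
  name: busybox01-ns1-net01
  namespace: test-ns1
"""

# Every "net01" becomes "net02" -- exactly the edit described above.
net02_manifest = net01_manifest.replace("net01", "net02")
print(net02_manifest)
```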

Verify connectivity

  • Taking busybox01-ns1-net01 as an example: it can ping its own gateway, but it cannot ping net02
[root@master02 Dockerfile]#  kubectl exec -it -n test-ns1 busybox01-ns1-net01 -- ping 10.10.10.254 -c 2
PING 10.10.10.254 (10.10.10.254): 56 data bytes
64 bytes from 10.10.10.254: seq=0 ttl=64 time=1.354 ms
64 bytes from 10.10.10.254: seq=1 ttl=64 time=0.282 ms

--- 10.10.10.254 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.282/0.818/1.354 ms
[root@master02 Dockerfile]# ^C
[root@master02 Dockerfile]#  kubectl exec -it -n test-ns1 busybox01-ns1-net01 -- ping 10.10.20.1 -c 2
PING 10.10.20.1 (10.10.20.1): 56 data bytes
^C
--- 10.10.20.1 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
[root@master02 Dockerfile]# 

Add a vRouter

  • Add a vRouter to connect net01 and net02

vrouter.png

  • One point may raise a question here: why is the vRouter's interface IP shown as .3 rather than .254?

Each virtual network has a default gateway address assigned to it, and each VM or container interface receives that address in the DHCP response it gets at initialization. When a workload sends a packet to an address outside its subnet, it ARPs for the MAC corresponding to the gateway IP, and the vRouter responds with its own MAC address. In this way, vRouters provide a fully distributed default gateway function for all virtual networks.
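The distributed-gateway behavior described above can be sketched as a toy model (the vRouter MAC below is a placeholder for illustration, not an actual Contrail value):

```python
import ipaddress

SUBNET = ipaddress.ip_network("10.10.10.0/24")
GATEWAY = ipaddress.ip_address("10.10.10.254")
VROUTER_MAC = "00:00:5e:00:01:00"  # placeholder MAC, illustration only

def arp_target(dst_ip: str):
    """IP the workload ARPs for: the destination if on-link, else the gateway."""
    dst = ipaddress.ip_address(dst_ip)
    return dst if dst in SUBNET else GATEWAY

def proxy_arp_reply(target_ip):
    """The local vRouter answers ARP for the gateway with its own MAC."""
    return VROUTER_MAC if target_ip == GATEWAY else None

# Sending to 10.10.20.1 (off-subnet): ARP for the gateway, answered by the vRouter.
print(arp_target("10.10.20.1"))                   # → 10.10.10.254
print(proxy_arp_reply(arp_target("10.10.20.1")))  # → 00:00:5e:00:01:00
```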

Re-verify connectivity

  • The two containers can now ping each other
[root@master02 Dockerfile]#  kubectl exec -it -n test-ns1 busybox01-ns1-net01 -- ping 10.10.20.1 -c 2
PING 10.10.20.1 (10.10.20.1): 56 data bytes
64 bytes from 10.10.20.1: seq=0 ttl=63 time=2.014 ms
64 bytes from 10.10.20.1: seq=1 ttl=63 time=0.618 ms

--- 10.10.20.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.618/1.316/2.014 ms
[root@master02 Dockerfile]# 

Packet analysis

  • Note that the two pods sit on two different nodes, so traffic between them must cross nodes
  • Capturing packets on the node's interface shows MPLS packets encapsulated inside UDP
    pkt_undp_mpls.png
  • The MPLS label is 34, consistent with the label allocated by the controller
    mpls_label.png
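The MPLSoUDP framing can be checked by hand: the first four bytes of the UDP payload are the MPLS shim header (label: 20 bits, exp: 3, bottom-of-stack: 1, TTL: 8). A minimal decoder, fed a constructed sample header rather than the actual capture:

```python
def parse_mpls_header(buf: bytes) -> dict:
    """Decode a 4-byte MPLS shim header: label(20) exp(3) S(1) TTL(8)."""
    word = int.from_bytes(buf[:4], "big")
    return {
        "label": word >> 12,
        "exp": (word >> 9) & 0x7,
        "bottom_of_stack": (word >> 8) & 0x1,
        "ttl": word & 0xFF,
    }

# Constructed sample: label=34 (as in the capture above), S=1, TTL=64.
sample = ((34 << 12) | (1 << 8) | 64).to_bytes(4, "big")
print(parse_mpls_header(sample))
# → {'label': 34, 'exp': 0, 'bottom_of_stack': 1, 'ttl': 64}
```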

vRouter

  • By default, the vRouter forwarding plane is implemented as a kernel module
  • Refer here for the detailed packet-processing flow
  • The vrouter agent processes on the node
[root@node02 ~]# ps -ef | grep vrouter
root     28777 28734  0 3月18 pts/0   00:02:54 /usr/bin/python /usr/bin/contrail-nodemgr --nodetype=contrail-vrouter
root     30150  5657  0 13:53 pts/0    00:00:00 grep --color=auto vrouter
root     30500 30484  0 3月18 pts/0   00:00:00 /bin/bash /entrypoint.sh /usr/bin/contrail-vrouter-agent
root     31067 30500  1 3月18 pts/0   00:22:29 /usr/bin/contrail-vrouter-agent
[root@node02 ~]# 
  • Related containers
[root@node02 ~]# docker ps | grep vrouter
baf71f4c302f        hub.juniper.net/contrail-vrouter-agent:1912-latest       "/entrypoint.sh /usr…"   25 hours ago        Up 25 hours                             vrouter_vrouter-agent_1
96d035865e8f        hub.juniper.net/contrail-nodemgr:1912-latest             "/entrypoint.sh /bin…"   25 hours ago        Up 25 hours                             vrouter_nodemgr_1
[root@node02 ~]# 