Kubernetes Multus-CNI
Introduction
Multus CNI is a CNI plugin for Kubernetes that supports attaching multiple network interfaces to a pod. Deploying this way lets users isolate the management network from the business network and gives effective control over the container cluster's network architecture.
As an example of Multus CNI configuring a pod's network interfaces, consider a pod with three interfaces: eth0, net0 and net1. eth0 connects the pod to the Kubernetes cluster network, reaching Kubernetes components and services (for example the API server and kubelet). net0 and net1 are additional network attachments that connect to other networks through other CNI plugins (for example vlan, vxlan or ptp).
Deploying Multus CNI
Set up the Kubernetes environment
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.2 --pod-network-cidr=192.168.0.0/16
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
Deploy Multus CNI into Kubernetes
Download the Multus source code
[root@develop k8s]# git clone https://github.com/intel/multus-cni.git
Cloning into 'multus-cni'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 13356 (delta 8), reused 16 (delta 5), pack-reused 13330
Receiving objects: 100% (13356/13356), 22.65 MiB | 238.00 KiB/s, done.
Resolving deltas: 100% (4584/4584), done.
[root@develop k8s]# cd multus-cni/
[root@develop multus-cni]# ls
build checkpoint CONTRIBUTING.md doc Dockerfile Dockerfile.openshift examples glide.lock glide.yaml images k8sclient LICENSE logging multus README.md testing test.sh types vendor
[root@develop multus-cni]#
Enter the images directory. Deploying Multus is driven mainly by two manifests: flannel-daemonset.yml and multus-daemonset.yml. flannel-daemonset.yml deploys the base components the flannel network needs. Multus is used in Kubernetes as a CNI plugin, so it is deployed much like any other network plugin: multus-daemonset.yml handles the whole process, which mainly consists of setting up RBAC permissions, writing the Multus configuration into the cluster, and running the container that provides the CNI functionality.
[root@develop images]# pwd
/data/k8s/multus-cni/images
[root@develop images]# ls
70-multus.conf entrypoint.sh flannel-daemonset.yml multus-crio-daemonset.yml multus-daemonset.yml README.md
[root@develop images]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78d4cf999f-4dcjq 0/1 Pending 0 21m
kube-system coredns-78d4cf999f-76p5l 0/1 Pending 0 21m
kube-system etcd-develop 1/1 Running 0 20m
kube-system kube-apiserver-develop 1/1 Running 0 20m
kube-system kube-controller-manager-develop 1/1 Running 0 20m
kube-system kube-proxy-f7n6d 1/1 Running 0 21m
kube-system kube-scheduler-develop 1/1 Running 0 20m
[root@develop images]# cat {flannel-daemonset.yml,multus-daemonset.yml} | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.extensions/kube-multus-ds-amd64 created
[root@develop images]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78d4cf999f-4dcjq 1/1 Running 0 22m
kube-system coredns-78d4cf999f-76p5l 1/1 Running 0 22m
kube-system etcd-develop 1/1 Running 0 22m
kube-system kube-apiserver-develop 1/1 Running 0 22m
kube-system kube-controller-manager-develop 1/1 Running 0 22m
kube-system kube-flannel-ds-amd64-wlc5m 1/1 Running 0 80s
kube-system kube-multus-ds-amd64-f69xz 1/1 Running 0 80s
kube-system kube-proxy-f7n6d 1/1 Running 0 22m
kube-system kube-scheduler-develop 1/1 Running 0 22m
[root@develop images]#
The cluster's CoreDNS pods are now Running, which indicates the flannel plugin has taken effect.
After deploying Multus, check whether "/etc/cni/net.d/" contains configuration files for any other CNI plugins: kubelet selects the CNI configuration by the lexical order of the file names. At this point "70-multus.conf" is the only file, so Multus is used as expected.
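The file-ordering behavior can be illustrated without touching a real node. The sketch below (assumed behavior, with made-up file names in a throwaway directory) shows which config file kubelet would load first:

```shell
# Sketch: kubelet loads the lexicographically first config file found in
# the CNI conf directory. The directory and file names here are
# hypothetical, not taken from a live node.
dir=$(mktemp -d)
touch "$dir/70-multus.conf" "$dir/99-loopback.conf"
picked=$(ls "$dir" | head -n 1)   # ls sorts names lexically by default
echo "kubelet would load: $picked"
rm -rf "$dir"
```

Because "70-multus.conf" sorts before "99-loopback.conf", Multus wins; a plugin whose file sorted earlier (for example "10-flannel.conf") would be picked instead, which is why the directory must be checked.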
[root@develop images]# ls /etc/cni/net.d/
70-multus.conf multus.d/
[root@develop images]# cat /etc/cni/net.d/70-multus.conf
{
  "name": "multus-cni-network",
  "type": "multus",
  "delegates": [
    {
      "type": "flannel",
      "name": "flannel.1",
      "delegate": {
        "isDefaultGateway": true,
        "hairpinMode": true
      }
    }
  ],
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
[root@develop images]#
With this in place, additional CNI plugins can be configured in the cluster so that newly created pods receive multiple network interfaces.
A NetworkAttachmentDefinition is a user-defined network resource object that describes how to attach a pod to the logical or physical network the object references.
Create the macvlan CNI plugin configuration file
[root@develop k8s]# cat macvlan-conf-1.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-1
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens15f1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [ {
            "subnet": "10.10.0.0/16",
            "rangeStart": "10.10.1.20",
            "rangeEnd": "10.10.3.50",
            "gateway": "10.10.0.254"
          } ]
        ]
      }
    }'
[root@develop k8s]# kubectl apply -f macvlan-conf-1.yaml
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf-1 created
[root@develop k8s]#
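As a side note, the size of the host-local range declared in macvlan-conf-1 can be checked with a little shell arithmetic. This is just a sketch, no cluster required; the IPs are the ones from the configuration above:

```shell
# Sketch: count how many addresses host-local can allocate from the
# range 10.10.1.20 - 10.10.3.50 declared in macvlan-conf-1.
ip2int() {
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
start=$(ip2int 10.10.1.20)
end=$(ip2int 10.10.3.50)
echo "allocatable addresses: $(( end - start + 1 ))"
```

That comes to 543 addresses, which is the pool host-local will hand out to macvlan-attached pods before allocation fails.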
The following commands show the creation status of the NetworkAttachmentDefinition
[root@develop k8s]# kubectl get network-attachment-definitions
NAME AGE
macvlan-conf-1 48s
[root@develop k8s]# kubectl describe network-attachment-definitions macvlan-conf-1
Name: macvlan-conf-1
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"macvlan-conf-1","namespace":"...
API Version: k8s.cni.cncf.io/v1
Kind: NetworkAttachmentDefinition
Metadata:
Creation Timestamp: 2019-02-09T09:27:20Z
Generation: 1
Resource Version: 2371
Self Link: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/macvlan-conf-1
UID: e68a99b7-2c4c-11e9-9baa-0024ecf14b1f
Spec:
Config: { "cniVersion": "0.3.0", "type": "macvlan", "master": "ens15f1", "mode": "bridge", "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "10.10.0.0/16", "rangeStart": "10.10.1.20", "rangeEnd": "10.10.3.50", "gateway": "10.10.0.254" } ] ] } }
Events: <none>
[root@develop k8s]#
Deploying a multi-interface pod with Multus CNI
Create a simple pod. The pod's YAML is the same as an ordinary pod's except for one extra piece of metadata, "annotations". Its value tells Kubernetes to apply the macvlan-conf-1 configuration when the pod is created, tying the pod to the NetworkAttachmentDefinition configured earlier; the reference is made by resource name.
[root@develop k8s]# cat pod-case-01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-01
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1
spec:
  containers:
  - name: pod-case-01
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
[root@develop k8s]#
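The annotation value is a comma-separated list, and Multus also accepts entries of the form <namespace>/<name> and <name>@<interface>. A minimal parsing sketch; the second entry ("kube-system/sriov-net@net5") is a made-up example, not a resource from this walkthrough:

```shell
# Sketch of the k8s.v1.cni.cncf.io/networks annotation shape: a
# comma-separated list of attachment references. "sriov-net" is a
# hypothetical example attachment, not defined in this article.
ann="macvlan-conf-1,kube-system/sriov-net@net5"
IFS=, read -r -a nets <<< "$ann"
for n in "${nets[@]}"; do
  echo "requested attachment: $n"
done
```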
Deploy the pod, then inspect its interfaces. eth0 is provided by the flannel plugin and net1 by macvlan (net1 reports NO-CARRIER/LOWERLAYERDOWN below because the macvlan master ens15f1 had no link at the time).
[root@develop k8s]# kubectl apply -f pod-case-01.yaml
pod/pod-case-01 created
[root@develop k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-case-01 1/1 Running 0 21s
[root@develop k8s]# kubectl exec pod-case-01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if110: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 0a:58:c0:a8:00:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.14/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2ccf:72ff:fe18:bf16/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: net1@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default
link/ether de:25:34:bb:33:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.1.25/16 scope global net1
valid_lft forever preferred_lft forever
[root@develop k8s]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no
cni0 8000.0a58c0a80001 no veth8c34cca9
vethae2601ad
vethb19ed751
docker0 8000.024276d139c2 no
virbr0 8000.5254005a2c64 yes virbr0-nic
[root@develop k8s]# ip a | grep cni0
31: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
inet 192.168.0.1/24 scope global cni0
108: vethb19ed751@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
109: veth8c34cca9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
110: vethae2601ad@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
[root@develop k8s]#