Learn k8s Together 19. NetworkPolicy
NetworkPolicy
Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. flannel has no network-policy support, while Calico does, so it is recommended to install Calico, or Canal (flannel + Calico: flannel handles the pod network, Calico handles network policy).
Environment
192.168.48.101 master01
192.168.48.201 node01
192.168.48.202 node02
calico
Calico version: v3.6
Official installation guide:
https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/calico
Download the YAML file
[root@master01 ~]# wget https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
--2019-05-02 12:52:13-- https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 206.189.89.118, 2400:6180:0:d1::575:a001
Connecting to docs.projectcalico.org (docs.projectcalico.org)|206.189.89.118|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19537 (19K) [application/x-yaml]
Saving to: ‘calico.yaml’
100%[===========================================================================================>] 19,537 20.2KB/s in 0.9s
2019-05-02 12:52:35 (20.2 KB/s) - ‘calico.yaml’ saved [19537/19537]
The default pod CIDR is 192.168.0.0/16.
To use a different pod CIDR, modify calico.yaml:
POD_CIDR="<your-pod-cidr>" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
Here we change it to 10.244.0.0/16:
POD_CIDR="10.244.0.0/16";sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
Images
[root@master01 ~]# grep image calico.yaml
image: calico/cni:v3.6.1
image: calico/cni:v3.6.1
image: calico/node:v3.6.1
image: calico/kube-controllers:v3.6.1
Image download link: https://pan.baidu.com/s/1QDbpc7Oln43J4KuTL8mAYg (extraction code: bc5z)
Load the images on every node:
docker load -i calico-3.6.1.tar.gz
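After loading, the Calico images should be visible locally (a quick check; the tags should match those listed above):
docker images | grep calico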
Remove the previously installed flannel
[root@master01 ~]# rm -rf /etc/cni/net.d/*
[root@master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-05-01 18:01:49-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12306 (12K) [text/plain]
Saving to: ‘kube-flannel.yml’
100%[===========================================================================================>] 12,306 28.2KB/s in 0.4s
2019-05-01 18:01:53 (28.2 KB/s) - ‘kube-flannel.yml’ saved [12306/12306]
[root@master01 ~]# kubectl delete -f kube-flannel.yml
podsecuritypolicy.extensions "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted
Apply calico.yaml
[root@master01 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@master01 ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-5cbcccc885-q4bxv 1/1 Running 0 20s 10.244.1.3 node01 <none> <none>
calico-node-862hx 1/1 Running 0 20s 192.168.48.101 master01 <none> <none>
calico-node-9xf9x 1/1 Running 0 20s 192.168.48.201 node01 <none> <none>
calico-node-gtgzg 1/1 Running 0 20s 192.168.48.202 node02 <none> <none>
coredns-fb8b8dccf-h6cqs 1/1 Running 1 24h 10.244.0.5 master01 <none> <none>
coredns-fb8b8dccf-kqn4g 1/1 Running 1 24h 10.244.0.4 master01 <none> <none>
etcd-master01 1/1 Running 2 24h 192.168.48.101 master01 <none> <none>
kube-apiserver-master01 1/1 Running 1 24h 192.168.48.101 master01 <none> <none>
kube-controller-manager-master01 1/1 Running 1 24h 192.168.48.101 master01 <none> <none>
kube-proxy-b6x8j 1/1 Running 0 24h 192.168.48.201 node01 <none> <none>
kube-proxy-c7xlr 1/1 Running 2 24h 192.168.48.101 master01 <none> <none>
kube-proxy-rkqtk 1/1 Running 0 24h 192.168.48.202 node02 <none> <none>
kube-scheduler-master01 1/1 Running 1 24h 192.168.48.101 master01 <none> <none>
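With the calico-node pods Running, the nodes should all report Ready (a quick check before continuing):
kubectl get nodes -o wide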
Reboot each node, then test again:
[root@master01 ~]# ifconfig
cali674c6acdc74: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet6 fe80::ecee:eeff:feee:eeee prefixlen 64 scopeid 0x20<link>
ether ee:ee:ee:ee:ee:ee txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
calicd08627017f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet6 fe80::ecee:eeff:feee:eeee prefixlen 64 scopeid 0x20<link>
ether ee:ee:ee:ee:ee:ee txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:a6:97:e9:94 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.48.101 netmask 255.255.255.0 broadcast 192.168.48.255
inet6 fe80::1bf1:3e06:bf27:adfa prefixlen 64 scopeid 0x20<link>
inet6 fe80::4446:4f12:39dd:db1 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:31:54:e9 txqueuelen 1000 (Ethernet)
RX packets 6661 bytes 731918 (714.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9616 bytes 5358367 (5.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 99284 bytes 22583863 (21.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 99284 bytes 22583863 (21.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tunl0: flags=193<UP,RUNNING,NOARP> mtu 1440
inet 10.244.241.64 netmask 255.255.255.255
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 5 bytes 569 (569.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 449 (449.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@master01 ~]# ip route show
default via 192.168.48.2 dev ens33 proto static metric 100
10.244.140.64/26 via 192.168.48.202 dev tunl0 proto bird onlink
10.244.196.128/26 via 192.168.48.201 dev tunl0 proto bird onlink
blackhole 10.244.241.64/26 proto bird
10.244.241.65 dev cali674c6acdc74 scope link
10.244.241.66 dev calicd08627017f scope link
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.48.0/24 dev ens33 proto kernel scope link src 192.168.48.101 metric 100
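To confirm which pod CIDR Calico is actually using, you can also query the ippools CRD created by the manifest (an illustrative check; the pool name can vary between versions, so just list them all):
kubectl get ippools.crd.projectcalico.org -o yaml | grep cidr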
[root@master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-pod01 1/1 Running 1 6m54s 10.244.140.66 node02 <none> <none>
[root@master01 ~]# curl 10.244.140.66
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
canal
Canal version: v3.6
Official installation guide:
https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/flannel
Download the YAML file
[root@master01 ~]# wget https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/canal/canal.yaml
--2019-05-01 18:41:33-- https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 178.128.115.5
Connecting to docs.projectcalico.org (docs.projectcalico.org)|178.128.115.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15517 (15K) [application/x-yaml]
Saving to: ‘canal.yaml’
100%[===========================================================================================>] 15,517 55.8KB/s in 0.3s
2019-05-01 18:41:43 (55.8 KB/s) - ‘canal.yaml’ saved [15517/15517]
The default pod CIDR is 10.244.0.0/16.
To use a different pod CIDR, modify canal.yaml:
POD_CIDR="<your-pod-cidr>" \
sed -i -e "s?10.244.0.0/16?$POD_CIDR?g" canal.yaml
Images
[root@master01 ~]# grep image canal.yaml
image: calico/cni:v3.6.1
image: calico/node:v3.6.1
image: quay.io/coreos/flannel:v0.9.1
Image download link: https://pan.baidu.com/s/1v2kQakwl5z2jnk1baVAvUw (extraction code: zm77)
Load the images on all nodes:
docker load -i canal-3.6.1.tar.gz
Remove the previously installed flannel
[root@master01 ~]# rm -rf /etc/cni/net.d/*
[root@master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-05-01 18:01:49-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12306 (12K) [text/plain]
Saving to: ‘kube-flannel.yml’
100%[===========================================================================================>] 12,306 28.2KB/s in 0.4s
2019-05-01 18:01:53 (28.2 KB/s) - ‘kube-flannel.yml’ saved [12306/12306]
[root@master01 ~]# kubectl delete -f kube-flannel.yml
podsecuritypolicy.extensions "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted
Apply canal.yaml
[root@master01 ~]# kubectl apply -f canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.extensions/canal created
serviceaccount/canal created
[root@master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
canal-bwvjs 2/2 Running 0 36s
canal-hm9f2 2/2 Running 0 36s
canal-zbv46 2/2 Running 0 36s
coredns-fb8b8dccf-h6cqs 1/1 Running 7 7h1m
coredns-fb8b8dccf-kqn4g 1/1 Running 7 7h1m
etcd-master01 1/1 Running 7 7h
kube-apiserver-master01 1/1 Running 7 7h
kube-controller-manager-master01 1/1 Running 5 7h
kube-proxy-b6x8j 1/1 Running 1 6h58m
kube-proxy-c7xlr 1/1 Running 6 7h1m
kube-proxy-rkqtk 1/1 Running 1 6h58m
kube-scheduler-master01 1/1 Running 6 7h
NetworkPolicy
- ingress: inbound traffic; rules restrict the source address and the pod port
- egress: outbound traffic; rules restrict the destination address and destination port
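Both directions can be combined in a single policy. A minimal sketch (illustrative only, not applied in this walkthrough) that selects every pod in its namespace, allows ingress only from pods in the same namespace, and allows all egress:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy          # hypothetical name, not used below
spec:
  podSelector: {}               # empty selector = every pod in this namespace
  ingress:
  - from:
    - podSelector: {}           # only pods from the same namespace
  egress:
  - {}                          # allow all outbound traffic
  policyTypes:
  - Ingress
  - Egress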
Create two namespaces
[root@master01 ~]# kubectl create namespace dev
namespace/dev created
[root@master01 ~]# kubectl create namespace test
namespace/test created
[root@master01 ~]# kubectl get ns
NAME STATUS AGE
default Active 7h22m
dev Active 3m55s
kube-node-lease Active 7h22m
kube-public Active 7h22m
kube-system Active 7h22m
test Active 3m45s
Create a pod in each of the dev and test namespaces
[root@master01 ~]# vim demo-dev.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-dev
  namespace: dev
  labels:
    app: myapp
    type: pod
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
[root@master01 ~]# vim demo-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-test
  namespace: test
  labels:
    app: myapp
    type: pod
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
[root@master01 ~]# kubectl apply -f demo-test.yaml
pod/demo-test created
[root@master01 ~]# kubectl apply -f demo-dev.yaml
pod/demo-dev created
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 2m51s 10.244.2.2 node02 <none> <none>
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 3m6s 10.244.1.2 node01 <none> <none>
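Before any policy is applied, both pods should be reachable from the master (a baseline check; both curls should return the myapp greeting):
curl 10.244.2.2
curl 10.244.1.2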
Deny all ingress
Write a YAML file that denies all inbound traffic in the dev namespace
[root@master01 ~]# vim deny-all-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
[root@master01 ~]# kubectl apply -f deny-all-ingress.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created
[root@master01 ~]# kubectl get networkpolicies -n dev
NAME POD-SELECTOR AGE
deny-all-ingress <none> 5m25s
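kubectl describe shows how the policy is interpreted, which is handy when debugging (an optional, illustrative check):
kubectl describe networkpolicy deny-all-ingress -n dev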
Test: inbound traffic to the dev pod is now blocked
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 2m51s 10.244.2.2 node02 <none> <none>
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 3m6s 10.244.1.2 node01 <none> <none>
[root@master01 ~]# curl 10.244.2.2
^C
[root@master01 ~]# curl 10.244.1.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Allow all ingress
Allow inbound traffic in dev; write the YAML file
[root@master01 ~]# vim allow-all-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
[root@master01 ~]# kubectl apply -f allow-all-ingress.yaml -n dev
networkpolicy.networking.k8s.io/allow-all-ingress created
[root@master01 ~]# kubectl get networkpolicies -n dev
NAME POD-SELECTOR AGE
allow-all-ingress <none> 4s
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 17m 10.244.2.2 node02 <none> <none>
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 17m 10.244.1.2 node01 <none> <none>
[root@master01 ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master01 ~]# curl 10.244.1.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Deny ingress from a single IP
Block inbound traffic coming from the demo-test pod only
[root@master01 ~]# vim allow-myapp.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
      type: pod
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.1.2/32
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
[root@master01 ~]# kubectl apply -f allow-myapp.yaml -n dev
networkpolicy.networking.k8s.io/allow-all-myapp created
[root@master01 ~]# kubectl get networkpolicies -n dev
NAME POD-SELECTOR AGE
allow-all-myapp app=myapp,type=pod 6s
Test
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 30m 10.244.2.2 node02 <none> <none>
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 31m 10.244.1.2 node01 <none> <none>
[root@master01 ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master01 ~]# kubectl exec -it -n test demo-test -- /bin/sh
/ # ping 10.244.2.2
PING 10.244.2.2 (10.244.2.2): 56 data bytes
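The ping from demo-test gets no replies: its IP is in the except list, and ICMP is not among the allowed ports anyway. Note that excluding a pod by IP is fragile, since pod IPs change when pods are recreated; a label-based whitelist is usually more robust. A hedged sketch (the role=frontend label is an assumption, nothing in this walkthrough creates it):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp
      type: pod
  ingress:
  - from:
    - namespaceSelector: {}     # any namespace...
      podSelector:
        matchLabels:
          role: frontend        # ...but only pods carrying this (assumed) label
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress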
Deny all egress
Block all outbound traffic in the test namespace
[root@master01 ~]# vim deny-all-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
[root@master01 ~]# kubectl apply -f deny-all-egress.yaml -n test
networkpolicy.networking.k8s.io/deny-all-egress created
[root@master01 ~]# kubectl get networkpolicies -n test
NAME POD-SELECTOR AGE
deny-all-egress <none> 18s
Test
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 41m 10.244.1.2 node01 <none> <none>
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 40m 10.244.2.2 node02 <none> <none>
[root@master01 ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master01 ~]# kubectl exec -it -n test demo-test -- /bin/sh
/ # ping 10.244.2.2
PING 10.244.2.2 (10.244.2.2): 56 data bytes
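As before, the ping from demo-test hangs because all of its outbound traffic is now denied. Keep in mind that a deny-all egress policy also blocks DNS lookups from the selected pods; if name resolution should keep working, DNS can be allowed explicitly (a sketch; adjust the ports if your cluster DNS does not listen on 53):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress        # hypothetical name
spec:
  podSelector: {}
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  policyTypes:
  - Egress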
Allow all egress
Allow outbound traffic in the test namespace
[root@master01 ~]# vim allow-all-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
[root@master01 ~]# kubectl apply -f allow-all-egress.yaml -n test
networkpolicy.networking.k8s.io/allow-all-egress created
[root@master01 ~]# kubectl get networkpolicies -n test
NAME POD-SELECTOR AGE
allow-all-egress <none> 11s
Test
[root@master01 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-dev 1/1 Running 0 47m 10.244.2.2 node02 <none> <none>
[root@master01 ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-test 1/1 Running 0 47m 10.244.1.2 node01 <none> <none>
[root@master01 ~]# kubectl exec -it -n test demo-test -- /bin/sh
/ # ping 10.244.2.2
PING 10.244.2.2 (10.244.2.2): 56 data bytes
64 bytes from 10.244.2.2: seq=0 ttl=62 time=0.837 ms
64 bytes from 10.244.2.2: seq=1 ttl=62 time=1.043 ms
64 bytes from 10.244.2.2: seq=2 ttl=62 time=0.290 ms
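To roll back these experiments, simply delete the policies; pods become non-isolated again once no policy selects them:
kubectl delete networkpolicy deny-all-ingress allow-all-ingress allow-all-myapp -n dev
kubectl delete networkpolicy deny-all-egress allow-all-egress -n test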