k8s nodeName/nodeSelector: testing Deployment and Pod behavior across restarts and node failures
Setup:
master: 132, node1: 11, node2: 12

This post tests the following scenarios:
1. Pinning a container to a specific node
2. Container death
3. Node reboot
4. Node unavailability
5. Node recovery

For each case, we check whether Kubernetes keeps our application running the way we expect.

1. Pinning a container to a specific node

This can be done with either nodeName or nodeSelector. For better extensibility and customization, nodeSelector is used here.
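For comparison, a nodeName sketch (hypothetical; this approach is not used in the test, and the node name is taken from the cluster listing) pins the pod directly in the pod spec and bypasses label matching entirely:

```yaml
# Alternative (not used here): pin the pod to one node by name.
# nodeName skips the scheduler's label matching, so the pod can never
# move even if the node's labels change.
spec:
  template:
    spec:
      nodeName: 10.3.14.11
      containers:
      - name: dfs-data
        image: registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data
```

nodeSelector is preferred here because the label can be moved or applied to additional nodes without editing the Deployment.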
Deployment YAML file:
[root@yzb-centos72-3 imgcloud]# more dfs-data-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dfs-data-depl
  namespace: imgcloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: dfs-data
    spec:
      nodeSelector:
        imgcloud/app: dfs-data
      containers:
      - name: dfs-data
        image: registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data
        imagePullPolicy: Always
        ports:
        - containerPort: 2101
          hostPort: 2101
        - containerPort: 3101
          hostPort: 3101
Before creating this Deployment, label the target node with kubectl label.
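The exact labeling command is not shown in the post; a sketch (node name from the pod listing below, label key/value from the Deployment's nodeSelector) would be:

```shell
# Label node1 so it matches the Deployment's nodeSelector (imgcloud/app: dfs-data)
kubectl label node 10.3.14.11 imgcloud/app=dfs-data

# Verify the label took effect
kubectl get node 10.3.14.11 --show-labels
```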
After creating the Deployment, check the pod status:
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
imgcloud dfs-data-depl-2521169640-62m79 1/1 Running 0 19m 172.17.73.2 10.3.14.11
The app is now reachable on node1 (11).
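Since the container publishes hostPorts 2101 and 3101, reachability can be checked from any host that can route to node1. The post does not say what protocol these ports speak, so a protocol-agnostic TCP check is the safest sketch:

```shell
# Verify the hostPorts accept TCP connections on node1 (2-second timeout)
nc -z -w 2 10.3.14.11 2101 && echo "2101 open"
nc -z -w 2 10.3.14.11 3101 && echo "3101 open"
```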
2. Container death
Stop the container:
[root@k8s-node-11 k8s]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
1a0e1704331a registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 45 seconds ago Up 43 seconds
[root@k8s-node-11 k8s]#
[root@k8s-node-11 k8s]# docker stop 3fcf58bbd812
3fcf58bbd812
[root@k8s-node-11 k8s]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
fed8ef3177c6 registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data "/usr/sbin/init" 8 seconds ago Up 7 seconds
1a0e1704331a registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 18 minutes ago Up 18 minutes
[root@k8s-node-11 k8s]#
A replacement container was started almost immediately, which matches our expectation.
3. Node reboot
After rebooting node1:
[root@k8s-node-11 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@k8s-node-11 ~]#
[root@k8s-node-11 ~]#
[root@k8s-node-11 ~]#
[root@k8s-node-11 ~]# netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2074/sshd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2317/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 751/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 2317/kubelet
tcp6 0 0 :::10255 :::* LISTEN 2317/kubelet
tcp6 0 0 :::2101 :::* LISTEN 2523/docker-proxy-c
tcp6 0 0 :::3101 :::* LISTEN 2514/docker-proxy-c
tcp6 0 0 :::4194 :::* LISTEN 2317/kubelet
[root@k8s-node-11 ~]#
[root@k8s-node-11 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed9be07a17f6 registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data "/usr/sbin/init" 13 seconds ago Up 12 seconds k8s_dfs-data.a55c69cf_dfs-data-depl-2521169640-62m79_imgcloud_dd8bf189-5af9-11e7-a7aa-0669a40010d2_368201ac
5407f90223f6 registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 15 seconds ago Up 14 seconds 0.0.0.0:2101->2101/tcp, 0.0.0.0:3101->3101/tcp k8s_POD.92d92fce_dfs-data-depl-2521169640-62m79_imgcloud_dd8bf189-5af9-11e7-a7aa-0669a40010d2_b6a7a804
[root@k8s-node-11 ~]#
During the window while node1 was rebooting, node2 did not create a replacement pod, so the app was unreachable for that period. Once node1 came back up, it restarted our service quickly.
4. Node unavailability
I stopped flannel on node1, then checked the node, Deployment, and pod:
[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME STATUS AGE
10.3.14.11 Ready 1h
10.3.14.12 Ready 1h
172.20.4.133 Ready 3d
[root@yzb-centos72-3 imgcloud]#
[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME STATUS AGE
10.3.14.11 NotReady 1h
10.3.14.12 Ready 1h
172.20.4.133 Ready 3d
[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
imgcloud dfs-data-depl 1 1 1 0 51m
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
imgcloud dfs-data-depl-2521169640-62m79 1/1 Running 2 33m
[root@yzb-centos72-3 imgcloud]#
The node status changed to NotReady and the Deployment's AVAILABLE count dropped to 0, but the pod listing was unchanged.
Inspecting the pod with kubectl describe, after a while (untimed; roughly two minutes by feel) the controller-manager kicked in:
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
36m 33m 14 {default-scheduler } Warning FailedScheduling pod (dfs-data-depl-2521169640-62m79) failed to fit in any node
fit failure summary on nodes : MatchNodeSelector (1)
33m 33m 1 {default-scheduler } Normal Scheduled Successfully assigned dfs-data-depl-2521169640-62m79 to 10.3.14.11
33m 33m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Created Created container with docker id 3fcf58bbd812; Security:[seccomp=unconfined]
33m 33m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Started Started container with docker id 3fcf58bbd812
33m 14m 2 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Pulling pulling image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
33m 14m 2 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Pulled Successfully pulled image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
14m 14m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Started Started container with docker id fed8ef3177c6
14m 14m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Created Created container with docker id fed8ef3177c6; Security:[seccomp=unconfined]
10m 10m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Pulling pulling image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
10m 10m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Pulled Successfully pulled image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
10m 10m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Created Created container with docker id ed9be07a17f6; Security:[seccomp=unconfined]
10m 10m 1 {kubelet 10.3.14.11} spec.containers{dfs-data} Normal Started Started container with docker id ed9be07a17f6
21s 21s 1 {controllermanager } Normal NodeControllerEviction Marking for deletion Pod dfs-data-depl-2521169640-62m79 from Node 10.3.14.11
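The delay before eviction is not arbitrary: in this generation of Kubernetes it is governed by kube-controller-manager flags. The values below are the upstream defaults, not necessarily what this cluster used; a roughly-two-minute observed delay suggests this cluster overrode --pod-eviction-timeout:

```shell
# Default timing knobs for node-failure handling (upstream defaults, for reference):
kube-controller-manager \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
# node-monitor-grace-period: how long a node may miss status updates before being marked NotReady
# pod-eviction-timeout: how long after NotReady before its pods are marked for deletion
```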
A new container was started on node2 (12):
[root@iz2ze0fq2isg8vphkpos5tz ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35701d1242b9 registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data "/usr/sbin/init" 39 seconds ago Up 37 seconds k8s_dfs-data.a55c69cf_dfs-data-depl-2521169640-s62vc_imgcloud_e2d678e9-5afe-11e7-a7aa-0669a40010d2_6463e4cb
2b48e0f43df9 registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 40 seconds ago Up 38 seconds 0.0.0.0:2101->2101/tcp, 0.0.0.0:3101->3101/tcp k8s_POD.92d92fce_dfs-data-depl-2521169640-s62vc_imgcloud_e2d678e9-5afe-11e7-a7aa-0669a40010d2_dd97ae88
5. Node recovery
[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME STATUS AGE
10.3.14.11 Ready 1h
10.3.14.12 Ready 1h
Now scale the app to 2 replicas:
[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
imgcloud dfs-data-depl 1 1 1 1 1h
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
imgcloud dfs-data-depl-2521169640-s62vc 1/1 Running 0 11m
[root@yzb-centos72-3 imgcloud]#
[root@yzb-centos72-3 imgcloud]# kubectl scale deployment dfs-data-depl -n imgcloud --replicas=2
deployment "dfs-data-depl" scaled
[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
imgcloud dfs-data-depl 2 2 2 2 1h
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
imgcloud dfs-data-depl-2521169640-ggptp 1/1 Running 0 10s
imgcloud dfs-data-depl-2521169640-s62vc 1/1 Running 0 12m
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
imgcloud dfs-data-depl-2521169640-ggptp 1/1 Running 0 17s 172.17.73.2 10.3.14.11
imgcloud dfs-data-depl-2521169640-s62vc 1/1 Running 0 12m 172.17.97.2 10.3.14.12
Provided the master and etcd are highly available, and the nodes keep power and network connectivity, operation is essentially unattended: the cluster repairs itself automatically.