Chewing Through k8s: Advanced Pod Management (Resource Management, Restart Policies, Probes)
1: Advanced Pod Management
1.1: Pod resource control
- In Docker we can apply resource controls to individual containers; Kubernetes likewise lets us control the resources of a Pod
- Resource limits are set in the YAML file, as follows
Each container in a Pod can specify one or more of the following:
'resources is the resource-limit field'
'requests sets the baseline resources'
'limits sets the resource ceiling, i.e. the most this pod may ever use'
spec.containers[].resources.limits.cpu 'CPU ceiling'
spec.containers[].resources.limits.memory 'memory ceiling'
spec.containers[].resources.requests.cpu 'baseline CPU allocated at creation'
spec.containers[].resources.requests.memory 'baseline memory allocated at creation'
Although requests and limits can only be specified on individual containers, it is still useful to speak of Pod-level requests and limits: a Pod's request/limit for a given resource type is simply the sum of that type's requests/limits across every container in the Pod.
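As a worked example, the frontend Pod defined below has two containers with identical settings, so its Pod-level totals come out to:
CPU request: 250m + 250m = 500m        CPU limit: 500m + 500m = 1 CPU
memory request: 64Mi + 64Mi = 128Mi    memory limit: 128Mi + 128Mi = 256Mi
These are exactly the figures kubectl describe node reports for the frontend Pod further down.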
- Write the YAML file
[root@master test]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db    'container 1'
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"    'baseline memory: 64Mi'
        cpu: "250m"       'baseline CPU: 250m, i.e. 0.25 of a CPU core'
      limits:
        memory: "128Mi"   'memory ceiling for this container: 128Mi'
        cpu: "500m"       'CPU ceiling for this container: 500m, i.e. 0.5 of a CPU core'
  - name: wp    'container 2'
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
- Create the pod resource from the YAML file
[root@master test]# kubectl create -f pod2.yaml
pod/frontend created
[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
frontend 0/2 ContainerCreating 0 47s
[root@master test]# kubectl describe pod frontend '//view detailed pod information'
...output omitted
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 116s default-scheduler Successfully assigned default/frontend to 20.0.0.56
Normal Pulled 61s kubelet, 20.0.0.56 Successfully pulled image "mysql"
Normal Created 60s kubelet, 20.0.0.56 Created container
Normal Started 60s kubelet, 20.0.0.56 Started container
Normal Pulling 60s kubelet, 20.0.0.56 pulling image "wordpress"
Normal Created 14s kubelet, 20.0.0.56 Created container
Normal Pulled 14s kubelet, 20.0.0.56 Successfully pulled image "wordpress"
Normal Pulling 13s (x2 over 115s) kubelet, 20.0.0.56 pulling image "mysql"
Normal Started 13s kubelet, 20.0.0.56 Started container
'the containers were created on node 20.0.0.56 (node02)'
- Check the containers on the node
[root@node02 ~]# docker ps -a |grep wp
a1491d7059f9 wordpress "docker-entrypoint.s…" 3 minutes ago Up 3 minutes k8s_wp_frontend_default_9cfe5bb5-96ab-11ea-8c4f-000c294b2dd3_0
[root@node02 ~]# docker ps -a |grep mysql
b51aa2ca3c74 mysql "docker-entrypoint.s…" 20 seconds ago Exited (137) 11 seconds ago k8s_db_frontend_default_9cfe5bb5-96ab-11ea-8c4f-000c294b2dd3_4
'Exit code 137 = 128 + 9 (SIGKILL): mysql needs more memory than its 128Mi limit, so it keeps getting OOM-killed'
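To confirm that this kill was memory-related, one option (a sketch, run on the node with the container ID shown above) is to query Docker's OOMKilled flag, which prints true when the kernel's OOM killer terminated the container:
[root@node02 ~]# docker inspect --format '{{.State.OOMKilled}}' b51aa2ca3c74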
- Check the node's resource status
[root@master test]# kubectl describe node 20.0.0.56
...output omitted
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default frontend 500m (12%) 1 (25%) 128Mi (7%) 256Mi (14%)
default my-nginx-69b8899fd6-glh6w 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default nginx-test-d55b94fd-9zmdj 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default nginx-test-d55b94fd-w4c5k 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 500m (12%) 1 (25%)
memory 128Mi (7%) 256Mi (14%)
- After deploying, check the pod status
[root@master test]# kubectl get pod    'the Pod has already restarted five times'
NAME       READY   STATUS             RESTARTS   AGE
frontend   1/2     CrashLoopBackOff   5          10m
- Delete all pod resources defined by the files in the current directory
kubectl delete -f .
- View the namespaces
[root@master test]# kubectl get ns
NAME          STATUS   AGE
default       Active   4d12h
kube-public   Active   4d12h
kube-system   Active   4d12h
1.2: Pod restart policies
- A Pod's restart policy (restartPolicy) defines the action taken when a container in the Pod fails
- 1. Always: always restart the container after it terminates; this is the default policy
- 2. OnFailure: restart the container only when it exits abnormally (with a non-zero exit code)
- 3. Never: never restart the container after it terminates
Note: Kubernetes has no operation for restarting a Pod object in place; the 'restart' here is performed by the kubelet, which recreates the failed container inside the same Pod
1.2.1: Viewing the restart policy of an existing pod resource
- Method 1: use the kubectl edit command
[root@master test]# kubectl edit pod frontend
restartPolicy: Always 'the restart policy is Always; if the YAML file specifies none, the default is Always'
- Method 2: export the pod resource as a YAML file and inspect it
kubectl get pod <pod-name> --export -o yaml > <file-name>.yaml
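For example, to check the frontend Pod from section 1.1 this way (assuming it still exists; note that --export has been deprecated in newer kubectl releases):
[root@master test]# kubectl get pod frontend --export -o yaml | grep restartPolicy
  restartPolicy: Always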
1.2.2: Create a resource and test the restart policy
- Delete all pod resources
[root@master test]# kubectl delete -f .
- Write a YAML file
[root@master test]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
- Create the pod and watch the restart status
[root@master test]# kubectl create -f pod3.yaml
pod/foo created
[root@master test]# kubectl get pod -w    '-w: watch continuously'
NAME   READY   STATUS              RESTARTS   AGE
foo    0/1     ContainerCreating   0          10s
foo    1/1     Running             0          13s
foo    0/1     Error               0          44s
foo    1/1     Running             1          53s
^C[root@master test]# kubectl get pod    'the restart count has gone up by 1'
NAME   READY   STATUS    RESTARTS   AGE
foo    1/1     Running   1          59s
- Change the restart policy in pod3.yaml
[root@master test]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10;exit 3
  restartPolicy: Never    'set the restart policy to Never; note this field sits at the same level as containers'
- Recreate the pod resource and watch the restart status
[root@master test]# kubectl delete -f pod3.yaml
pod "foo" deleted
[root@master test]# kubectl apply -f pod3.yaml
pod/foo created
[root@master test]# kubectl get pod -w
NAME   READY   STATUS              RESTARTS   AGE
foo    0/1     ContainerCreating   0          6s
foo    1/1     Running             0          15s
foo    0/1     Error               0          45s    'the container exits with status code 3, so the status shows Error; had it exited with code 0, the status would show Completed'
^C[root@master test]# kubectl get pod
NAME   READY   STATUS   RESTARTS   AGE
foo    0/1     Error    0          67s
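The third policy, OnFailure, is not tested above; a minimal sketch (with a hypothetical pod name) would look like the following. With exit 3 the container is restarted just as under Always, but if the command ended with exit 0 the pod would show Completed and never restart:
apiVersion: v1
kind: Pod
metadata:
  name: foo-onfailure    'hypothetical name used only in this sketch'
spec:
  restartPolicy: OnFailure    'restart only after an abnormal (non-zero) exit'
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 3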
1.3: Pod health checks – probes (Probe)
1.3.1: Probe types
Pod health checking is done with probes; both probe types can be defined on the same container at the same time
Probes fall into two types:
1. Liveness probe (livenessProbe)
- Determines whether the container is alive (running). If the check fails, the kubelet kills the container, which is then handled according to the Pod's restartPolicy.
- If a container defines no liveness probe, the kubelet treats its liveness check as always returning Success
2. Readiness probe (readinessProbe)
- Determines whether the container's service is ready (ready). If the check fails, Kubernetes removes the Pod from the Service's endpoints, and adds it back to the endpoint list once it has returned to the Ready state. This guarantees that traffic to the Service is never forwarded to a Pod instance whose service is unavailable
- The endpoint list is the Service's load-balancing backend list, holding the addresses of its Pods
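A quick way to watch this mechanism in action (a sketch, assuming a Service named my-nginx exists, as the my-nginx Pod listed earlier suggests) is to keep an eye on its endpoint list; a Pod that fails its readiness probe drops out of the list and reappears once it is Ready again:
[root@master test]# kubectl get endpoints my-nginx -w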
1.3.2: The three probe check methods
Probes support three check methods, and both liveness and readiness probes may use any of them:
1. exec (most common): executes a command inside the container; an exit status of 0 counts as success
2. httpGet: sends an HTTP GET request; a status code in the 200–399 range counts as success
3. tcpSocket: attempts to open a TCP socket to the container; success means the connection was established
1.3.3: Checking with the exec method
- Edit the YAML file
[root@master test]# vim pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 300
    livenessProbe:    'probe type'
      exec:    'use the exec check method for this liveness probe'
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5    'tells the kubelet to wait 5 seconds before running the first probe'
      periodSeconds: 5          'the kubelet runs the liveness probe every 5 seconds'
'To perform a probe, the kubelet executes the command cat /tmp/healthy inside the container. If the command returns 0, the kubelet considers the container healthy and alive; if it returns a non-zero value, the kubelet kills the container and restarts it.'
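The probe's check can also be reproduced by hand with kubectl exec (a sketch: during the first 30 seconds the file exists and the command exits 0; after that it fails exactly the way the probe sees it):
[root@master test]# kubectl exec liveness-exec -- cat /tmp/healthy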
- Create the pod from the YAML file
[root@master test]# kubectl create -f pod4.yaml
pod/liveness-exec created
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
foo 0/1 Error 0 108m
liveness-exec 1/1 Running 0 18s
liveness-exec 1/1 Running 1 92s    'the container has restarted once'
[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
foo 0/1 Error 0 147m
liveness-exec 1/1 Running 1 38m
- View the pod's detailed event information
[root@master test]# kubectl describe pod liveness-exec
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned default/liveness-exec to 20.0.0.56
Normal Created 37m (x3 over 40m) kubelet, 20.0.0.56 Created container
Normal Started 37m (x3 over 40m) kubelet, 20.0.0.56 Started container
Normal Pulling 36m (x4 over 40m) kubelet, 20.0.0.56 pulling image "busybox"
Normal Killing 36m (x3 over 39m) kubelet, 20.0.0.56 Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 30m (x8 over 40m) kubelet, 20.0.0.56 Successfully pulled image "busybox"
Warning Unhealthy 14s (x46 over 39m) kubelet, 20.0.0.56 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
1.3.4: Checking with the httpGet method
- Write the YAML file
[root@master test]# kubectl delete -f pod4.yaml    'delete the previous pod resource first'
pod "liveness-exec" deleted
[root@master test]# vim pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:    'specify the probe check method'
        path: /healthz
        port: 80
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3    'wait 3 seconds before the first probe'
      periodSeconds: 3          'probe every 3 seconds'
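The kubelet's request can likewise be reproduced by hand (a sketch; substitute the Pod IP reported by kubectl get pod -o wide). The stock nginx image serves no /healthz page, so the request returns 404 and this probe is bound to fail:
[root@master test]# kubectl get pod liveness-http -o wide    'find the Pod IP'
[root@master test]# curl -i http://<pod-ip>/healthz    'nginx answers with HTTP/1.1 404 Not Found'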
- Create the pod resource
[root@master test]# kubectl apply -f pod5.yaml
pod/liveness-http created
- Watch the restart status
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-http 0/1 ContainerCreating 0 10s
liveness-http 1/1 Running 0 43s
^C[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 0 54s
[root@master test]# kubectl describe pod liveness-http
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 82s default-scheduler Successfully assigned default/liveness-http to 20.0.0.54
Normal Pulled 15s (x2 over 42s) kubelet, 20.0.0.54 Successfully pulled image "nginx"
Normal Created 15s (x2 over 42s) kubelet, 20.0.0.54 Created container
Normal Started 15s (x2 over 41s) kubelet, 20.0.0.54 Started container
Normal Pulling 4s (x3 over 81s) kubelet, 20.0.0.54 pulling image "nginx"
Warning Unhealthy 4s (x6 over 37s) kubelet, 20.0.0.54 Liveness probe failed: HTTP probe failed with statuscode: 404
'the page returns status code 404, a failure, so the container is killed and restarted'
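To make the probe pass instead, one fix (a sketch) is to point it at a path the stock nginx image actually serves, e.g. the default index page:
    livenessProbe:
      httpGet:
        path: /    'nginx serves its default page here, so the probe receives a 200'
        port: 80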
1.3.5: Checking with the tcpSocket method
- Write the YAML file
[root@master test]# vim pod6.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness-tcp
spec:
  containers:
  - name: liveness-tcp
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
'With this configuration, the kubelet attempts to open a socket to the container on the specified port. If the connection can be established the container is considered healthy; if not, it is considered failing.'
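The same check can be tried by hand from a node (a sketch, assuming nc is installed; substitute the Pod IP from kubectl get pod -o wide). A TCP connection that opens successfully on port 80 is exactly what the probe counts as healthy:
[root@node02 ~]# nc -zv <pod-ip> 80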
- Create the pod resource
[root@master test]# kubectl delete -f .
pod "liveness-http" deleted
[root@master test]# kubectl apply -f pod6.yaml
pod/liveness-tcp created
- Watch the pod status
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-tcp 0/1 Running 0 9s
^C[root@master test]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
liveness-tcp   0/1     Running   0          18s    'READY is 0/1 because the readiness probe has not reported success yet; once the tcpSocket check on port 80 succeeds, the pod will show 1/1'