k8s -- Pod Management (Resource Control, Restart Policy, and Probes)
Table of Contents
1. Pod Resource Control
2. Pod Restart Policy
3. Pod Health Checks: Probes
  3.1 Checking with exec
  3.2 Checking with httpGet
  3.3 Checking with tcpSocket
1. Pod Resource Control
In Docker we can apply resource controls to containers, and Kubernetes likewise lets us control the resources of a Pod.
The official documentation on Pod resource control:
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
We can set the limits in the YAML file.
Each container of a Pod can specify one or more of the following:
resources    # the resource-control field
requests     # the baseline resources guaranteed to the container
limits       # the resource ceiling, i.e. the most the Pod can use
spec.containers[].resources.limits.cpu        # CPU ceiling
spec.containers[].resources.limits.memory     # memory ceiling
spec.containers[].resources.requests.cpu      # baseline CPU allocated at creation
spec.containers[].resources.requests.memory   # baseline memory allocated at creation
Although requests and limits can only be specified on individual containers, it is convenient to speak of Pod-level resource requests and limits: for a given resource type, the Pod's request/limit is the sum of that type's request/limit across all of its containers. For example, the two containers below each request 250m of CPU, so the Pod's CPU request is 500m.
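If you are unsure which fields the resources block accepts, kubectl can print its schema (a quick reference check; kubectl explain is a standard subcommand):
[root@master test]# kubectl explain pod.spec.containers.resources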
Write the YAML file
[root@master test]# vim demo01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"    # baseline memory: 64Mi
        cpu: "250m"       # baseline CPU: 250 millicores (25% of one core)
      limits:
        memory: "1024Mi"  # memory ceiling for this container: 1024Mi
        cpu: "500m"       # CPU ceiling for this container: 50% of one core
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Create the Pod from the YAML file
[root@master test]# kubectl create -f demo01.yaml
pod/frontend created
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend 2/2 Running 0 38s
[root@master test]# kubectl describe pod frontend   # view the Pod's detailed information
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 86s default-scheduler Successfully assigned default/frontend to 192.168.179.123
Normal Pulling 85s kubelet, 192.168.179.123 pulling image "mysql"
Normal Pulled 75s kubelet, 192.168.179.123 Successfully pulled image "mysql"
Normal Created 75s kubelet, 192.168.179.123 Created container
Normal Started 75s kubelet, 192.168.179.123 Started container
Normal Pulling 75s kubelet, 192.168.179.123 pulling image "wordpress"
Normal Pulled 60s kubelet, 192.168.179.123 Successfully pulled image "wordpress"
Normal Created 60s kubelet, 192.168.179.123 Created container
Normal Started 60s kubelet, 192.168.179.123 Started container
# the containers were created on node 192.168.179.123
Check whether the node's resource allocation matches the YAML
[root@master test]# kubectl describe nodes 192.168.179.123
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default frontend 500m (12%) 1 (25%) 128Mi (7%) 1152Mi (66%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 500m (12%) 1 (25%)
memory 128Mi (7%) 1152Mi (66%)
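Because each container here requests less than its limit, Kubernetes assigns the Pod the Burstable QoS class; this is recorded in the Pod status and can be read back (a quick check against the Pod created above):
[root@master test]# kubectl get pod frontend -o jsonpath='{.status.qosClass}'
This should print Burstable. If the cluster runs metrics-server, actual consumption can also be compared against these allocations with kubectl top pod frontend.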
2. Pod Restart Policy
A Pod's restart policy (restartPolicy) defines the action taken after a Pod's container fails:
- Always: whenever a container terminates, always restart it; this is the default policy
- OnFailure: restart the container only when it exits abnormally (non-zero exit code)
- Never: never restart a container after it terminates
Note: Kubernetes has no in-place "restart Pod" operation; restartPolicy governs the Pod's containers, and restarting a Pod itself means deleting and recreating it.
2.1 Viewing an existing Pod's restart policy
Method 1: view it with kubectl edit
[root@master test]# kubectl edit pod frontend
restartPolicy: Always   # the policy is Always; when the YAML does not specify one, Always is the default
Method 2: export the Pod as YAML and inspect it
kubectl get pod <pod-name> --export -o yaml > <file-name>.yaml
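Note that --export was deprecated and later removed in newer kubectl releases; on current versions the same field can be read from a plain -o yaml dump (filtered here for brevity):
[root@master test]# kubectl get pod frontend -o yaml | grep restartPolicy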
2.2 Creating a resource to test the restart policy
Write a YAML file
[root@master test]# vim demo02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox   # a minimal Linux image, handy for testing
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
Create the Pod and watch its restart count; the container exits with code 3 after 30 seconds, so the default Always policy keeps restarting it:
[root@master test]# kubectl create -f demo02.yaml
[root@master test]# kubectl get pods   # the restart count keeps growing
NAME   READY   STATUS    RESTARTS   AGE
foo    0/1     Running   3          4m4s
Change the restart policy in demo02.yaml
[root@master test]# kubectl delete -f demo02.yaml
[root@master test]# vim demo02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
  restartPolicy: Never   # add a restart policy of Never
Recreate the Pod and check its restart status
[root@master test]# kubectl create -f demo02.yaml
pod/foo created
[root@master test]# kubectl get pods   # the exit code 3 is non-zero, so the status shows Error; if the command exited with 0 instead, it would show Completed
NAME READY STATUS RESTARTS AGE
foo 0/1 Error 0 35s
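The exit code behind the Error status is preserved in the container status and can be read back directly (a quick check; the jsonpath below assumes the single-container Pod above):
[root@master test]# kubectl get pod foo -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'
This should print 3.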
3. Pod Health Checks: Probes
Pod health checks are performed by probes that inspect the Pod, and both probe types may be defined on the same container.
There are two types of probes:
1. Liveness probe (livenessProbe)
Determines whether the container is alive (running). If the probe reports unhealthy, the kubelet kills the container and then acts according to the Pod's restartPolicy.
If a container does not define this probe, the kubelet treats its liveness result as permanently Success.
2. Readiness probe (readinessProbe)
Determines whether the container's service is ready (ready). If the probe reports unhealthy, Kubernetes removes the Pod from the Service's endpoints, and adds it back to the backend Endpoint list once it returns to Ready. This ensures that clients accessing the Service are never forwarded to a Pod whose service is unavailable.
The Endpoint list is the Service's load-balancing backend list, holding the addresses of its Pod resources.
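For instance, if these Pods sat behind a Service (hypothetical name web-svc), readiness transitions could be observed as addresses appear in and disappear from its endpoint list:
[root@master test]# kubectl get endpoints web-svc -w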
Probes support three check mechanisms, and both liveness and readiness probes can use any of them (the common tuning fields are sketched after this list):
- exec (most common): run a command inside the container; an exit code of 0 counts as success
- httpGet: send an HTTP GET request; a status code in the 200-399 range counts as success
- tcpSocket: attempt a TCP socket connection; success means the connection was established
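Besides initialDelaySeconds and periodSeconds used in the examples below, every probe type accepts the same set of tuning fields; a minimal sketch (the field names come from the standard Pod spec, the values are illustrative):
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 5   # wait this long after container start before the first probe
  periodSeconds: 5         # how often to probe
  timeoutSeconds: 1        # how long a single probe may run before counting as failed
  failureThreshold: 3      # consecutive failures before the container is considered unhealthy
  successThreshold: 1      # consecutive successes to be considered healthy again (must be 1 for liveness)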
3.1 Checking with exec
Write the YAML file. The kubelet runs the command cat /tmp/healthy inside the container to probe it. If the command returns 0, the kubelet considers the container alive and healthy; otherwise the kubelet kills the container and restarts it.
[root@master test]# vim demo03.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    livenessProbe:
      exec:                # liveness probe using the exec mechanism
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5   # start probing 5 seconds after the container starts
      periodSeconds: 5         # probe every 5 seconds
Create the Pod from the YAML file, then check its status and detailed events; /tmp/healthy exists for the first 30 seconds, after which it is removed and the probe starts failing:
[root@master test]# kubectl create -f demo03.yaml
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 2 3m14s
[root@master test]# kubectl describe pod liveness-exec
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m15s default-scheduler Successfully assigned default/liveness-exec to 192.168.179.123
Normal Pulled 68s (x2 over 2m10s) kubelet, 192.168.179.123 Successfully pulled image "busybox"
Normal Created 68s (x2 over 2m10s) kubelet, 192.168.179.123 Created container
Normal Started 68s (x2 over 2m9s) kubelet, 192.168.179.123 Started container
Warning Unhealthy 24s (x6 over 99s) kubelet, 192.168.179.123 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
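The RESTARTS column above is driven by these probe failures; the count can also be read directly from the container status (a quick check against the Pod created above):
[root@master test]# kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'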
3.2 Checking with httpGet
In the configuration file below, the Pod has a single container. The periodSeconds field tells the kubelet to run the liveness probe every 3 seconds, and the initialDelaySeconds field tells the kubelet to wait 3 seconds before the first probe. To perform a probe, the kubelet sends an HTTP GET request to the server running in the container and listening on port 80. If the handler for the server's /healthz path returns a success code, the kubelet considers the container alive and healthy; if it returns a failure code, the kubelet kills the container and restarts it.
Any code greater than or equal to 200 and less than 400 indicates success; any other code indicates failure.
The httpGet probe supports the following optional control fields:
- host: hostname to connect to, defaulting to the Pod IP; a Host header can instead be set in httpHeaders.
- scheme: protocol used to connect to the host, defaulting to HTTP.
- path: URI to access on the HTTP server.
- httpHeaders: custom headers for the request; HTTP allows repeated headers.
- port: number or name of the port to access on the container.
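As an illustration of these fields (the values are hypothetical; note that port references a named containerPort instead of a number):
livenessProbe:
  httpGet:
    scheme: HTTPS        # default is HTTP
    path: /healthz
    port: web            # matches a containerPort named "web" in the same container
    httpHeaders:
    - name: X-Probe
      value: liveness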
[root@master test]# vim demo04.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:             # liveness probe using the httpGet mechanism
        path: /healthz
        port: 80
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
[root@master test]# kubectl create -f demo04.yaml
pod/liveness-http created
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 3 73s
[root@master test]# kubectl describe pod liveness-http
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m15s default-scheduler Successfully assigned default/liveness-http to 192.168.179.123
Normal Pulled 82s (x3 over 2m4s) kubelet, 192.168.179.123 Successfully pulled image "nginx"
Normal Created 82s (x3 over 2m4s) kubelet, 192.168.179.123 Created container
Normal Started 82s (x3 over 2m4s) kubelet, 192.168.179.123 Started container
Normal Pulling 73s (x4 over 2m15s) kubelet, 192.168.179.123 pulling image "nginx"
Warning Unhealthy 73s (x9 over 2m1s) kubelet, 192.168.179.123 Liveness probe failed: HTTP probe failed with statuscode: 404
# the page returns status code 404, a failure, so the container is restarted
The default home page index.html, by contrast, is reachable, so probing it succeeds:
[root@master test]# vim demo04.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /index.html   # probe the default home page instead
        port: 80
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
Delete the old resource and create the new one, then check the Pod's events again: the Pod starts normally and is not restarted
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-nginx 1/1 Running 0 39s
[root@master test]# kubectl describe pod liveness-nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 72s default-scheduler Successfully assigned default/liveness-nginx to 192.168.179.123
Normal Pulling 71s kubelet, 192.168.179.123 pulling image "nginx"
Normal Pulled 38s kubelet, 192.168.179.123 Successfully pulled image "nginx"
Normal Created 38s kubelet, 192.168.179.123 Created container
Normal Started 38s kubelet, 192.168.179.123 Started container
3.3 Checking with tcpSocket
With this configuration, the kubelet tries to open a TCP socket to the container on the specified port. If the connection can be established, the container is considered healthy; otherwise it is considered faulty.
[root@master test]# vim demo05.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness-tcp
spec:
  containers:
  - name: liveness-tcp
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
[root@master test]# kubectl create -f demo05.yaml
[root@master test]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
liveness-tcp 0/1 ContainerCreating 0 9s
liveness-tcp 0/1 Running 0 16s
liveness-tcp 1/1 Running 0 30s
The TCP check works much like the HTTP check, and this example uses both probe types. Five seconds after the container starts, the kubelet sends the first readinessProbe, which attempts to connect to port 80 of the container; if the probe succeeds, the Pod is marked ready, and the kubelet connects again every 10 seconds.
In addition to the readiness probe, this configuration includes a livenessProbe. Fifteen seconds after the container starts, the kubelet sends the first livenessProbe, again trying to connect to port 80 of the container; if the connection fails, the container is restarted.
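To confirm that both probes were registered, kubectl describe lists them along with their timing parameters (the exact output format varies slightly between versions):
[root@master test]# kubectl describe pod liveness-tcp | grep -iE 'liveness|readiness'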