Container probes (liveness/readiness probes)

Container probes check whether the application instance inside a container is working properly; they are a traditional mechanism for guaranteeing service availability. If a probe finds that an instance's state does not match expectations, Kubernetes "removes" the problem instance so that it no longer receives business traffic. Kubernetes provides two kinds of probes for this purpose:

  • liveness probes: check whether the application instance is currently running normally; if not, k8s restarts the container

  • readiness probes: check whether the application instance can currently accept requests; if not, k8s does not forward traffic to it

In short, livenessProbe decides whether to restart the container, while readinessProbe decides whether to forward requests to it.
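The demos in this article all use livenessProbe, but a readinessProbe is configured the same way, with the same three probing methods. A minimal sketch (the /ready path is a hypothetical health endpoint, not something nginx provides):

```yaml
    readinessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /ready  # hypothetical health endpoint
```

A pod whose readiness probe fails stays Running and is not restarted; it is only removed from the Service endpoints so that no traffic is routed to it.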

Both kinds of probe currently support three probing methods:

  1. Exec: run a command inside the container; if the command's exit code is 0, the application is considered healthy, otherwise unhealthy
  livenessProbe:
    exec:
      command:
      - cat
      - /tmp/healthy
  2. TCPSocket: try to open a TCP connection to a port of the container; if the connection can be established, the application is considered healthy, otherwise unhealthy
  livenessProbe:
    tcpSocket:
      port: 8080
  3. HTTPGet: send an HTTP GET request to a URL of the web application inside the container; if the returned status code is between 200 and 399, the application is considered healthy, otherwise unhealthy
  livenessProbe:
    httpGet:
      path: / # URI path
      port: 80 # port number
      host: 127.0.0.1 # host address
      scheme: HTTP # protocol, HTTP or HTTPS

Below are a few demonstrations, using liveness probes as the example:

Method 1: Exec

# create pod-liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec: # run a command that reads a file
        command: ["/bin/cat","/tmp/hello.txt"]
# create the pod
[root@k8s-master pod-files]# kubectl create -f pod-liveness-exec.yaml
pod/pod-liveness-exec created

# check the pod's status (note the RESTARTS column)
[root@k8s-master pod-files]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pod-liveness-exec   1/1     Running   2          72s   10.244.0.103   k8s-node1   <none>           <none>

# RESTARTS counts restarts: although the pod shows Running, it keeps restarting automatically. Why?
This is because livenessProbe decides whether to restart the container. In our YAML, hello.txt does not exist inside the container, so `cat /tmp/hello.txt` exits with a non-zero code. The command fails, k8s considers the container unhealthy, and so the container keeps restarting in a loop...
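The exit-code rule can be reproduced with a plain shell, outside any cluster. A minimal sketch (it uses a temporary directory instead of the literal /tmp/hello.txt so it can run anywhere):

```shell
# Exec-probe semantics: exit code 0 = healthy, anything else = unhealthy.
tmpdir=$(mktemp -d)

cat "$tmpdir/hello.txt" 2>/dev/null      # file does not exist yet
missing_code=$?
echo "missing file  -> exit code $missing_code (probe fails, container restarted)"

touch "$tmpdir/hello.txt"
cat "$tmpdir/hello.txt" >/dev/null       # file exists now
present_code=$?
echo "existing file -> exit code $present_code (probe succeeds)"

rm -r "$tmpdir"
```

The kubelet applies exactly this check: any non-zero exit from the exec command counts as one probe failure.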

This can be verified from the pod's events, as follows:
# kubectl describe pod pod-liveness-exec -n dev

Events:
  Type     Reason     Age                    From                Message
  ----     ------     ----                   ----                -------
  Normal   Scheduled  <unknown>              default-scheduler   Successfully assigned dev/pod-liveness-exec to k8s-node1
  Normal   Created    7m6s (x3 over 8m7s)    kubelet, k8s-node1  Created container nginx
  Normal   Started    7m6s (x3 over 8m7s)    kubelet, k8s-node1  Started container nginx
  Normal   Pulling    6m44s (x4 over 8m14s)  kubelet, k8s-node1  Pulling image "nginx"
  Warning  Unhealthy  6m44s (x9 over 8m4s)   kubelet, k8s-node1  Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
  Normal   Killing    6m44s (x3 over 7m44s)  kubelet, k8s-node1  Container nginx failed liveness probe, will be restarted
  Normal   Pulled     3m10s (x7 over 8m7s)   kubelet, k8s-node1  Successfully pulled image "nginx"

# The events above show that the health check runs right after the nginx container starts
# When the check fails, the container is killed and then restarted (this is the restart policy at work)

How do we fix this restart loop? Simple: change the command to cat a file that exists, or run any other command that succeeds, e.g.:

    livenessProbe:
      exec:
        command: ["/bin/ls","/tmp"]
# the pod's events now look normal:
Events:
  Type    Reason     Age        From                Message
  ----    ------     ----       ----                -------
  Normal  Scheduled  <unknown>  default-scheduler   Successfully assigned dev/pod-liveness-exec to k8s-node1
  Normal  Pulling    83s        kubelet, k8s-node1  Pulling image "nginx"
  Normal  Pulled     73s        kubelet, k8s-node1  Successfully pulled image "nginx"
  Normal  Created    73s        kubelet, k8s-node1  Created container nginx
  Normal  Started    73s        kubelet, k8s-node1  Started container nginx

Method 2: TCPSocket

# create pod-liveness-tcpsocket.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tcpsocket
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      tcpSocket: # try to connect to port 8080
        port: 8080
# create the pod
[root@k8s-master pod-files]# kubectl create -f pod-liveness-tcpsocket.yaml
pod/pod-liveness-tcpsocket created

# check the pod's events
[root@k8s-master pod-files]# kubectl describe pods pod-liveness-tcpsocket -n dev
Events:
  Type     Reason     Age        From                Message
  ----     ------     ----       ----                -------
  Normal   Scheduled  <unknown>  default-scheduler   Successfully assigned dev/pod-liveness-tcpsocket to k8s-node1
  Normal   Pulling    26s        kubelet, k8s-node1  Pulling image "nginx"
  Normal   Pulled     15s        kubelet, k8s-node1  Successfully pulled image "nginx"
  Normal   Created    15s        kubelet, k8s-node1  Created container nginx
  Normal   Started    15s        kubelet, k8s-node1  Started container nginx
  Warning  Unhealthy  9s         kubelet, k8s-node1  Liveness probe failed: dial tcp 10.244.0.105:8080: connect: connection refused
  
# The events above show that the probe tried port 8080 and the connection was refused
# Wait a moment and check the pod again: RESTARTS is no longer 0 and keeps growing
[root@k8s-master pod-files]# kubectl get pods pod-liveness-tcpsocket -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
pod-liveness-tcpsocket   1/1     Running   3          2m19s   10.244.0.105   k8s-node1   <none>           <none>

# Fix: change the probe to a port that is actually open, e.g. 80, and the pod runs normally
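The corrected probe only needs the port changed, since nginx listens on 80 in this image:

```yaml
    livenessProbe:
      tcpSocket:
        port: 80  # nginx listens here, so the connection succeeds
```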

Method 3: HTTPGet

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:  # effectively requests http://127.0.0.1:80/hello
        scheme: HTTP # protocol, HTTP or HTTPS
        port: 80 # port number
        path: /hello # URI path
# create the pod
[root@k8s-master pod-files]# kubectl create -f pod-liveness-httpget.yaml
pod/pod-liveness-httpget created

# check the pod's events
Events:
  Type     Reason     Age               From                Message
  ----     ------     ----              ----                -------
  Normal   Scheduled  <unknown>         default-scheduler   Successfully assigned dev/pod-liveness-httpget to k8s-node1
  Normal   Pulling    36s               kubelet, k8s-node1  Pulling image "nginx"
  Normal   Pulled     26s               kubelet, k8s-node1  Successfully pulled image "nginx"
  Normal   Created    26s               kubelet, k8s-node1  Created container nginx
  Normal   Started    26s               kubelet, k8s-node1  Started container nginx
  Warning  Unhealthy  9s (x2 over 19s)  kubelet, k8s-node1  Liveness probe failed: HTTP probe failed with statuscode: 404

# The events above show that the probe requested the path but it was not found: a 404 error
# Wait a moment and check the pod again: RESTARTS is no longer 0 and keeps growing
[root@k8s-master pod-files]# kubectl get pods pod-liveness-httpget -n dev -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pod-liveness-httpget   1/1     Running   2          94s   10.244.0.107   k8s-node1   <none>           <none>

# Fix: change path to one that actually exists, e.g. /, and the pod runs normally
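The corrected probe hits the default nginx page, which returns 200:

```yaml
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /  # the default nginx page returns 200, within the 200-399 success range
```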

Additional notes:

We have now demonstrated all three probing methods with liveness probes. Looking at the sub-fields of livenessProbe, however, there are a few more settings besides these three methods, explained here:

[root@k8s-master pod-files]# kubectl explain pod.spec.containers.livenessProbe
FIELDS:
   exec <Object>
   tcpSocket    <Object>
   httpGet      <Object>
   initialDelaySeconds  <integer>  # seconds to wait after container start before the first probe
   timeoutSeconds       <integer>  # probe timeout; default 1s, minimum 1s
   periodSeconds        <integer>  # how often to probe; default 10s, minimum 1s
   failureThreshold     <integer>  # consecutive failures before the probe is considered failed; default 3, minimum 1
   successThreshold     <integer>  # consecutive successes before the probe is considered successful; default 1

A configuration example:

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80 
        path: /
      initialDelaySeconds: 30 # start probing 30s after container start
      timeoutSeconds: 5 # probe timeout of 5s
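The remaining fields combine with the ones above. As a sketch (the values are illustrative), the configuration below tolerates roughly periodSeconds × failureThreshold = 10 × 3 = 30 seconds of consecutive failure before the container is restarted:

```yaml
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
      initialDelaySeconds: 30  # first probe 30s after container start
      periodSeconds: 10        # probe every 10s
      timeoutSeconds: 5        # each probe times out after 5s
      failureThreshold: 3      # restart after 3 consecutive failures (~30s)
      successThreshold: 1      # one success resets the failure count (must be 1 for liveness)
```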