1. Wrong Container Image / Invalid Registry Permissions

When a Pod is in the ErrImagePull or ImagePullBackOff state, it is usually caused by one of the following three reasons (assuming network problems have already been ruled out):

  • The image tag is misspelled
  • The image does not exist, or the registry address is wrong
  • Missing permission to pull the image (imagePullSecrets was not configured; see the sketch below)
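
For the private-registry case, a common fix is to create a docker-registry Secret and reference it from the Pod spec. A minimal sketch, assuming a Secret named regcred (the registry server and credentials are placeholders):

$ kubectl create secret docker-registry regcred \
    --docker-server=<your-registry-server> \
    --docker-username=<your-username> \
    --docker-password=<your-password>

# pod spec excerpt
spec:
  containers:
    - name: app
      image: <your-registry-server>/app:1.0
  imagePullSecrets:
    - name: regcred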

2. Application Crashing after Launch

When a Pod shows the CrashLoopBackOff status, Kubernetes has tried to start the Pod, but one or more of its containers failed to start. Run kubectl describe on the Pod and look at its Events; the Reason and Exit Code there usually point at the cause.

For application failures, checking the application logs is of course essential. If the application writes its logs to stdout (which is recommended), you can read them with kubectl logs.

Tip:
When a Pod has been restarted, the useful log output is often in the previous container instance; add the --previous flag to view the logs of that previous instance, as in the commands below.
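
A minimal command sequence for this kind of troubleshooting (the Pod name is a placeholder):

$ kubectl describe pod <podname>        # check Reason / Exit Code in Events
$ kubectl logs <podname>                # logs of the current container instance
$ kubectl logs <podname> --previous     # logs of the previous instance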

3. Missing ConfigMap or Secret

ConfigMaps and Secrets are the best-practice way to inject configuration into an application at runtime. But if you forget to create the ConfigMap or Secret before the application starts, the Pod will fail to start.

Missing ConfigMap

When a Pod references a ConfigMap that has not been created yet, its status shows RunContainerError. Running kubectl describe on the Pod reveals an event similar to: configmaps xxxxxxx not found.
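
Creating the missing ConfigMap resolves the error. A hedged sketch (the name my-config and its contents are hypothetical):

$ kubectl create configmap my-config --from-literal=LOG_LEVEL=info
# or build it from a file:
$ kubectl create configmap my-config --from-file=./config.properties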

Missing Secret

Suppose a Pod mounts a Secret named myothersecret as a volume, but myothersecret has not been created yet:

# missing-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      volumeMounts:
        - mountPath: /etc/secret/
          name: myothersecret
  restartPolicy: Never
  volumes:
    - name: myothersecret
      secret:
        secretName: myothersecret

After running kubectl create -f missing-secret.yaml, the Pod stays in ContainerCreating. Again, kubectl describe shows an event similar to: secrets "myothersecret" not found.
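
To unblock the Pod above, create the Secret it expects; a minimal sketch (the key and value are only an example):

$ kubectl create secret generic myothersecret --from-literal=password=changeme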

Once the required ConfigMap/Secret has been created, the container starts normally.

4. Liveness/Readiness Probe Failure

One important thing to understand when working with containers and Kubernetes: the fact that a container is running does not mean the application inside it is working correctly.

Kubernetes provides liveness and readiness probes for this (they periodically issue an HTTP request or open a TCP connection) to confirm that the application really is working. If the liveness probe fails, Kubernetes kills the container and creates a new one (the events will contain something like: container "xxxxxxxx" is unhealthy, it will be killed and re-created). If the readiness probe fails, the Pod is removed from the Service's available backends, so no traffic is sent to it.

The Pod below defines both a liveness and a readiness probe; each periodically issues an HTTP GET to /healthz on port 8080:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
    - name: test-container
      image: rosskukulinski/leaking-app
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3

Three common reasons a health check fails:

  • The probe is misconfigured, e.g. the probe URL is wrong;
  • The check fires too early, e.g. the application is still starting up when the probe runs; consider increasing initialDelaySeconds;
  • The application genuinely cannot answer the probe, e.g. because of a misconfigured database connection.

As usual, start troubleshooting by looking at the Pod's logs.
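
To check the probe endpoint by hand, one hedged approach is to port-forward to the Pod and request the path the probe uses (the Pod name and port come from the example above):

$ kubectl port-forward liveness-pod 8080:8080
$ curl -i http://localhost:8080/healthz     # run in another terminal; a healthy app returns HTTP 200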

5. Exceeding CPU/Memory Limits

Kubernetes cluster administrators can put CPU and memory limits on containers and Pods. If a Deployment requests more resources than those limits allow, it will fail to deploy.

Example: in the Deployment below, resources.requests.memory is set to 5Gi:

# gateway.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gateway
spec:
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: test-container
          image: nginx
          resources:
            requests:
              memory: 5Gi

After running kubectl create -f gateway.yaml, no Pod gets created. Inspect the Deployment with kubectl describe:

$ kubectl describe deployment/gateway
Name:            gateway
Namespace:        fail
CreationTimestamp:    Sat, 11 Feb 2017 15:03:34 -0500
Labels:            app=gateway
Selector:        app=gateway
Replicas:        0 updated | 1 total | 0 available | 1 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:    0 max unavailable, 1 max surge
OldReplicaSets:        
NewReplicaSet:        gateway-764140025 (0/1 replicas created)
Events:
  FirstSeen    LastSeen    Count   From                SubObjectPath   Type        Reason          Message
  ---------    --------    -----   ----                -------------   --------    ------          -------
  4m        4m      1   {deployment-controller }            Normal      ScalingReplicaSet   Scaled up replica set gateway-764140025 to 1

The Deployment created a ReplicaSet named gateway-764140025, but available is still 0. Dig one level deeper with kubectl describe on that ReplicaSet:

$ kubectl describe rs/gateway-764140025
Name:        gateway-764140025
Namespace:    fail
Image(s):    nginx
Selector:    app=gateway,pod-template-hash=764140025
Labels:        app=gateway
        pod-template-hash=764140025
Replicas:    0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen    LastSeen    Count   From                SubObjectPath   Type        Reason      Message
  ---------    --------    -----   ----                -------------   --------    ------      -------
  6m        28s     15  {replicaset-controller }            Warning     FailedCreate    Error creating: pods "gateway-764140025-" is forbidden: [maximum memory usage per Pod is 100Mi, but request is 5368709120., maximum memory usage per Container is 100Mi, but request is 5Gi.]

Here is the cause: the maximum memory allowed per Pod and per container is 100Mi, but the request was 5Gi.

Note: you can inspect the current limits with kubectl describe limitrange.
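
For reference, a LimitRange that would reject the Pod above might look roughly like this (a sketch, assuming only the 100Mi per-Pod and per-container caps reported in the event):

# limit-range.yaml (illustrative)
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - type: Pod
      max:
        memory: 100Mi
    - type: Container
      max:
        memory: 100Mi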

6. Resource Quotas

Similar to the resource limits in section 5, Kubernetes also lets administrators set Resource Quotas per namespace, such as the number of Pods that may run. When the resources being created exceed the quota, the extra requests will fail.

Example: create a Deployment named gateway-quota:

# test-quota.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gateway-quota
spec:
  template:
    spec:
      containers:
        - name: test-container
          image: nginx

After it is created, the Pod shows up as expected:

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
gateway-quota-551394438-pix5d   1/1       Running   0          16s

Next, run kubectl scale deploy/gateway-quota --replicas=3 to scale out to 3 Pods, then check the Pods again:

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
gateway-quota-551394438-pix5d   1/1       Running   0          9m

There is still only one Pod. Run kubectl describe deploy/gateway-quota for details:

$ kubectl describe deploy/gateway-quota
Name:            gateway-quota
Namespace:        fail
CreationTimestamp:    Sat, 11 Feb 2017 16:33:16 -0500
Labels:            app=gateway
Selector:        app=gateway
Replicas:        1 updated | 3 total | 1 available | 2 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:    1 max unavailable, 1 max surge
OldReplicaSets:        
NewReplicaSet:        gateway-quota-551394438 (1/3 replicas created)
Events:
  FirstSeen    LastSeen    Count   From                SubObjectPath   Type        Reason          Message
  ---------    --------    -----   ----                -------------   --------    ------          -------
  9m        9m      1   {deployment-controller }            Normal      ScalingReplicaSet   Scaled up replica set gateway-quota-551394438 to 1
  5m        5m      1   {deployment-controller }            Normal      ScalingReplicaSet   Scaled up replica set gateway-quota-551394438 to 3

The last event shows the ReplicaSet was indeed scaled to 3, yet 2 replicas are unavailable. Dig further with kubectl describe replicaset on the corresponding ReplicaSet:

$ kubectl describe replicaset gateway-quota-551394438
Name:        gateway-quota-551394438
Namespace:    fail
Image(s):    nginx
Selector:    app=gateway,pod-template-hash=551394438
Labels:        app=gateway
        pod-template-hash=551394438
Replicas:    1 current / 3 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen    LastSeen    Count   From                SubObjectPath   Type        Reason          Message
  ---------    --------    -----   ----                -------------   --------    ------          -------
  11m        11m     1   {replicaset-controller }            Normal      SuccessfulCreate    Created pod: gateway-quota-551394438-pix5d
  11m        30s     33  {replicaset-controller }            Warning     FailedCreate        Error creating: pods "gateway-quota-551394438-" is forbidden: exceeded quota: compute-resources, requested: pods=1, used: pods=1, limited: pods=1

And here is the cause: exceeded quota: compute-resources, requested: pods=1, used: pods=1, limited: pods=1
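
The quota blocking the ReplicaSet was presumably a ResourceQuota along these lines (a sketch; only the pods=1 limit is confirmed by the event above). It can be inspected with kubectl describe quota compute-resources:

# compute-resources-quota.yaml (illustrative)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: fail
spec:
  hard:
    pods: "1"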

7. Insufficient Cluster Resources

If your cluster does not autoscale, you may one day find that its CPU and memory are exhausted. This does not mean the CPU and memory are literally used up, but that all of the resources Kubernetes counts for scheduling have already been requested, so nothing more can be scheduled.

Suppose a cluster has 2 CPUs worth of schedulable capacity and we deploy the following Deployment:

# cpu-scale.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cpu-scale
spec:
  template:
    metadata:
      labels:
        app: cpu-scale
    spec:
      containers:
        - name: test-container
          image: nginx
          resources:
            requests:
              cpu: 1

This Deployment requests 1 CPU. Kubernetes' own system services also consume some CPU/memory, so the remaining schedulable CPU is actually less than 1.

If you now run kubectl scale deploy/cpu-scale --replicas=2 to scale out to 2 Pods, the second Pod stays Pending:

$ kubectl scale deploy/cpu-scale --replicas=2
deployment "cpu-scale" scaled
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
cpu-scale-908056305-phb4j   0/1       Pending   0          4m
cpu-scale-908056305-xstti   1/1       Running   0          5m

Check the Pod's events with kubectl describe:

$ kubectl describe pod cpu-scale-908056305-phb4j
Name:        cpu-scale-908056305-phb4j
Namespace:    fail
Node:        gke-ctm-1-sysdig2-35e99c16-qwds/10.128.0.4
Start Time:    Sun, 12 Feb 2017 08:57:51 -0500
Labels:        app=cpu-scale
        pod-template-hash=908056305
Status:        Pending
IP:        
Controllers:    ReplicaSet/cpu-scale-908056305
[...]
Events:
  FirstSeen    LastSeen    Count   From            SubObjectPath   Type        Reason          Message
  ---------    --------    -----   ----            -------------   --------    ------          -------
  3m        3m      1   {default-scheduler }            Warning     FailedScheduling    pod (cpu-scale-908056305-phb4j) failed to fit in any node
fit failure on node (gke-ctm-1-sysdig2-35e99c16-wx0s): Insufficient cpu
fit failure on node (gke-ctm-1-sysdig2-35e99c16-tgfm): Insufficient cpu
fit failure on node (gke-ctm-1-sysdig2-35e99c16-qwds): Insufficient cpu

The reason is clear: the scheduler could not find any node with enough capacity (Insufficient cpu), so scheduling failed.
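
To see how much schedulable CPU and memory each node still has, a quick check is (the exact output format varies by Kubernetes version):

$ kubectl describe nodes | grep -A 5 "Allocated resources"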

If you want the cluster to scale itself automatically, have a look at the cluster-autoscaler project.

8. PersistentVolume fails to mount

Another common mistake is creating a Deployment that points at a PersistentVolume that does not exist. Whichever kind of persistent storage you use, the failure looks much the same.

Below is a Deployment that tries to use a GCE PersistentDisk named my-data-disk:

# volume-test.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: volume-test
spec:
  template:
    metadata:
      labels:
        app: volume-test
    spec:
      containers:
        - name: test-container
          image: nginx
          volumeMounts:
          - mountPath: /test
            name: test-volume
      volumes:
      - name: test-volume
        # This GCE PD must already exist (oops!)
        gcePersistentDisk:
          pdName: my-data-disk
          fsType: ext4

After creating it, the container stays in ContainerCreating:

$ kubectl get pods
NAME                           READY     STATUS              RESTARTS   AGE
volume-test-3922807804-33nux   0/1       ContainerCreating   0          3m

Check the event log:

$ kubectl describe pod volume-test-3922807804-33nux
Name:        volume-test-3922807804-33nux
Namespace:    fail
Node:        gke-ctm-1-sysdig2-35e99c16-qwds/10.128.0.4
Start Time:    Sun, 12 Feb 2017 09:24:50 -0500
Labels:        app=volume-test
        pod-template-hash=3922807804
Status:        Pending
IP:        
Controllers:    ReplicaSet/volume-test-3922807804
[...]
Volumes:
  test-volume:
    Type:    GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:    my-data-disk
    FSType:    ext4
    Partition:    0
    ReadOnly:    false
[...]
Events:
  FirstSeen    LastSeen    Count   From                        SubObjectPath   Type        Reason      Message
  ---------    --------    -----   ----                        -------------   --------    ------      -------
  4m        4m      1   {default-scheduler }                        Normal      Scheduled   Successfully assigned volume-test-3922807804-33nux to gke-ctm-1-sysdig2-35e99c16-qwds
  1m        1m      1   {kubelet gke-ctm-1-sysdig2-35e99c16-qwds}           Warning     FailedMount Unable to mount volumes for pod "volume-test-3922807804-33nux_fail(e2180d94-f12e-11e6-bd01-42010af0012c)": timeout expired waiting for volumes to attach/mount for pod "volume-test-3922807804-33nux"/"fail". list of unattached/unmounted volumes=[test-volume]
  1m        1m      1   {kubelet gke-ctm-1-sysdig2-35e99c16-qwds}           Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "volume-test-3922807804-33nux"/"fail". list of unattached/unmounted volumes=[test-volume]
  3m        50s     3   {controller-manager }                       Warning     FailedMount Failed to attach volume "test-volume" on node "gke-ctm-1-sysdig2-35e99c16-qwds" with: GCE persistent disk not found: diskName="my-data-disk" zone="us-central1-a"

The Pod was scheduled onto a node successfully, but the kubelet could not mount the requested volume. The controller-manager message on the last line gives the real cause: GCE persistent disk not found: diskName="my-data-disk" zone="us-central1-a". The disk my-data-disk simply had not been created; once it exists, the Pod comes up normally.
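
Creating the missing disk first is the fix here; a hedged sketch with gcloud (the size is arbitrary):

$ gcloud compute disks create my-data-disk --size=200GB --zone=us-central1-a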

9. Validation Errors

Mistakes in the resource YAML itself are another common way to block a deployment. For example:

# test-application.deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-app
spec:
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - image: nginx
        name: nginx
      resources:
        limits:
          cpu: 100m
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 100Mi

At first glance the config looks fine, but running it produces the following error:

$ kubectl create -f test-application.deploy.yaml
error: error validating "test-application.deploy.yaml": error validating data: found invalid field resources for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false

The error message tells us the problem: the resources field is not valid under v1.PodSpec; it belongs under v1.Container. The fix is to indent the resources block so that it sits under the container entry inside containers, as shown below.
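
The corrected container section would therefore look like this:

      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi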

Besides misplaced fields like the one above, plain spelling mistakes are also very common. To catch them, it helps to run a couple of checks before applying anything, for example:

  • Check that the YAML syntax is valid: python -c 'import yaml,sys;yaml.safe_load(sys.stdin)' < test-application.deploy.yaml
  • Use --dry-run to check whether the Kubernetes API objects are valid: kubectl create -f test-application.deploy.yaml --dry-run --validate=true

10. Container Image Not Updating

With image pulls, you sometimes hit this situation: you modify an image but push it to the registry under the same name and tag, and newly created Pods still run the old image.

The cause is an unsuitable image pull policy, i.e. imagePullPolicy. The field has three possible values:

  • Always
  • Never
  • IfNotPresent

If no policy is configured, the default depends on the tag: if the image tag is latest, Always is used; otherwise IfNotPresent.

So there are three ways to deal with this problem:

  1. Keep using the latest tag (strongly discouraged)
  2. Set imagePullPolicy to Always explicitly (see the snippet below)
  3. Give every change a tag that uniquely identifies it (e.g. use the commit id as the image tag)
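
For option 2, the pull policy is set per container; a minimal excerpt (image name and tag are placeholders):

# container spec excerpt
spec:
  containers:
    - name: app
      image: myregistry/app:v1.2.3
      imagePullPolicy: Always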

Summary

In real deployments all sorts of unexpected problems come up, and some debugging is unavoidable. Getting familiar with the common debug commands helps you locate problems quickly:

    kubectl describe deployment/<deployname>
    kubectl describe replicaset/<rsname>
    kubectl get pods
    kubectl describe pod/<podname>
    kubectl logs <podname> --previous