k8s (3): Pod Management
Introduction to Pods
What is a Pod?
Everything in Kubernetes can be understood as a resource object: Pods, ReplicaSets (rs), Services (svc), and so on are all resource objects. A Pod is the smallest deployable unit in Kubernetes; it wraps one or more containers that share network and storage.
Pod Classification
Pods fall into two categories:
- Standalone Pods: a Pod of this kind cannot repair itself. Once created (whether directly by you or by some controller), it is scheduled onto a cluster Node and stays on that Node until its process terminates, it is deleted, it is evicted for lack of resources, or the Node fails. Standalone Pods do not self-heal: if the Node running the Pod fails, or the scheduler itself fails, the Pod is simply deleted. Likewise, the Pod is evicted if its Node runs short of resources or enters maintenance.
- Controller-managed Pods: Kubernetes uses a higher-level abstraction called a Controller to manage Pod instances. A Controller can create and manage multiple Pods, providing replica management, rolling upgrades, and cluster-level self-healing. For example, if a Node fails, the Controller automatically reschedules that Node's Pods onto other healthy Nodes. Although Pods can be used directly, in Kubernetes they are normally managed through a Controller.
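For comparison, a standalone Pod is created directly from a Pod manifest with no controller behind it. A minimal sketch (the name standalone-nginx is illustrative):

```yaml
# A bare Pod: nothing will recreate it if it is deleted or its node fails.
apiVersion: v1
kind: Pod
metadata:
  name: standalone-nginx   # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```

Deleting this Pod with kubectl delete pod removes it for good, which is exactly the behavior the Deployment examples below avoid.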
Pod Features
Pod Labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
Apply the manifest:
[k8s@server1 ~]$ kubectl apply -f nginx.yml
deployment.apps/deployment-example created
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-7cf7d6dbc8-dzpgg 1/1 Running 0 27s
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 28s
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 27s
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 27s
View the pods' labels:
[k8s@server1 ~]$ kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-example-7cf7d6dbc8-dzpgg 1/1 Running 0 13m app=nginx,pod-template-hash=7cf7d6dbc8
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 13m app=nginx,pod-template-hash=7cf7d6dbc8
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 13m app=nginx,pod-template-hash=7cf7d6dbc8
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 13m app=nginx,pod-template-hash=7cf7d6dbc8
Filter and display labels (-l filters by label; -L shows the label's value as a column):
[k8s@server1 ~]$ kubectl get pod -l app
NAME READY STATUS RESTARTS AGE
deployment-example-7cf7d6dbc8-dzpgg 1/1 Running 0 16m
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 16m
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 16m
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 16m
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-dzpgg 1/1 Running 0 17m nginx
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 17m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 17m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 17m nginx
Modify a label:
[k8s@server1 ~]$ kubectl label pod deployment-example-7cf7d6dbc8-dzpgg app=apache --overwrite
pod/deployment-example-7cf7d6dbc8-dzpgg labeled
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-b26t7 1/1 Running 0 5s nginx
deployment-example-7cf7d6dbc8-dzpgg 1/1 Running 0 18m apache
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 18m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 18m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 18m nginx
[k8s@server1 ~]$ kubectl delete pod deployment-example-7cf7d6dbc8-dzpgg
pod "deployment-example-7cf7d6dbc8-dzpgg" deleted
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-b26t7 1/1 Running 0 5m39s nginx
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 24m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 24m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 24m nginx
[k8s@server1 ~]$ kubectl delete pod deployment-example-7cf7d6dbc8-b26t7
pod "deployment-example-7cf7d6dbc8-b26t7" deleted
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-dx7qd 1/1 Running 0 77s nginx
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 25m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 25m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 25m nginx
[k8s@server1 ~]$
The controller's selector matches Pods labeled app=nginx. When one Pod's label is changed, that Pod escapes the controller's control; since the controller is configured for 4 replicas, it immediately creates a new Pod to restore the count. This is why a Pod labeled nginx is recreated automatically after deletion, while the relabeled (apache) Pod is not recreated when deleted — that is the difference a controller makes. If the apache label is changed back to nginx, there are now 5 Pods matching the selector, and the controller deletes the surplus one.
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-9ktfc 1/1 Running 0 4s nginx
deployment-example-7cf7d6dbc8-dx7qd 1/1 Running 0 8m31s apache
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 32m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 32m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 32m nginx
[k8s@server1 ~]$ kubectl label pod deployment-example-7cf7d6dbc8-dx7qd app=nginx --overwrite
pod/deployment-example-7cf7d6dbc8-dx7qd labeled
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-9ktfc 1/1 Running 0 19s nginx
deployment-example-7cf7d6dbc8-dx7qd 1/1 Terminating 0 8m46s nginx
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 33m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 33m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 33m nginx
[k8s@server1 ~]$ kubectl get pod -L app
NAME READY STATUS RESTARTS AGE APP
deployment-example-7cf7d6dbc8-9ktfc 1/1 Running 0 3m25s nginx
deployment-example-7cf7d6dbc8-r2qz8 1/1 Running 0 36m nginx
deployment-example-7cf7d6dbc8-sx4mv 1/1 Running 0 36m nginx
deployment-example-7cf7d6dbc8-v9s2d 1/1 Running 0 36m nginx
Node Labels
Set the label disktype=ssd on node server2:
[k8s@server1 ~]$ kubectl label node server2 disktype=ssd
node/server2 labeled
[k8s@server1 ~]$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server1 Ready control-plane,master 2d21h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
server2 Ready <none> 2d20h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux
server3 Ready <none> 2d9h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
Configure the controller to schedule the Pods onto nodes labeled disktype=ssd:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
      nodeSelector:
        disktype: ssd
All four Pods are scheduled to server2:
[k8s@server1 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-54b8d4d9f6-757ln 1/1 Running 0 3m14s 10.244.1.24 server2 <none> <none>
deployment-example-54b8d4d9f6-8rq5r 1/1 Running 0 3m14s 10.244.1.23 server2 <none> <none>
deployment-example-54b8d4d9f6-qbwlq 1/1 Running 0 3m8s 10.244.1.25 server2 <none> <none>
deployment-example-54b8d4d9f6-vlqgc 1/1 Running 0 3m6s 10.244.1.26 server2 <none> <none>
Remove the label from server2:
[k8s@server1 ~]$ kubectl label nodes server2 disktype-
node/server2 labeled
[k8s@server1 ~]$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server1 Ready control-plane,master 2d21h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
server2 Ready <none> 2d20h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux
server3 Ready <none> 2d10h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
Now change the replica count in nginx.yml to 8 and re-apply:
[k8s@server1 ~]$ kubectl apply -f nginx.yml
deployment.apps/deployment-example configured
[k8s@server1 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-54b8d4d9f6-757ln 1/1 Running 0 10m 10.244.1.24 server2 <none> <none>
deployment-example-54b8d4d9f6-8rq5r 1/1 Running 0 10m 10.244.1.23 server2 <none> <none>
deployment-example-54b8d4d9f6-bs5w7 0/1 Pending 0 5s <none> <none> <none> <none>
deployment-example-54b8d4d9f6-hqvmf 0/1 Pending 0 5s <none> <none> <none> <none>
deployment-example-54b8d4d9f6-p25cc 0/1 Pending 0 5s <none> <none> <none> <none>
deployment-example-54b8d4d9f6-qbwlq 1/1 Running 0 10m 10.244.1.25 server2 <none> <none>
deployment-example-54b8d4d9f6-vlqgc 1/1 Running 0 10m 10.244.1.26 server2 <none> <none>
deployment-example-54b8d4d9f6-w9llw 0/1 Pending 0 5s <none> <none> <none> <none>
[k8s@server1 ~]$ kubectl describe pod deployment-example-54b8d4d9f6-bs5w7
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 34s (x2 over 34s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.
Now label server3 as well:
[k8s@server1 ~]$ kubectl label nodes server3 disktype=ssd
node/server3 labeled
[k8s@server1 ~]$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server1 Ready control-plane,master 2d21h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
server2 Ready <none> 2d21h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux
server3 Ready <none> 2d10h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
[k8s@server1 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-54b8d4d9f6-757ln 1/1 Running 0 36m 10.244.1.24 server2 <none> <none>
deployment-example-54b8d4d9f6-8rq5r 1/1 Running 0 36m 10.244.1.23 server2 <none> <none>
deployment-example-54b8d4d9f6-bs5w7 1/1 Running 0 26m 10.244.3.18 server3 <none> <none>
deployment-example-54b8d4d9f6-hqvmf 1/1 Running 0 26m 10.244.3.19 server3 <none> <none>
deployment-example-54b8d4d9f6-p25cc 1/1 Running 0 26m 10.244.3.20 server3 <none> <none>
deployment-example-54b8d4d9f6-qbwlq 1/1 Running 0 36m 10.244.1.25 server2 <none> <none>
deployment-example-54b8d4d9f6-vlqgc 1/1 Running 0 36m 10.244.1.26 server2 <none> <none>
deployment-example-54b8d4d9f6-w9llw 1/1 Running 0 26m 10.244.3.21 server3 <none> <none>
Note that changing node labels only affects Pods that have not yet been scheduled; already-scheduled Pods are untouched.
A Pod is scheduled only once in its lifetime: once it is assigned to a node, it runs on that node until it stops or is terminated.
Pod Status and Lifecycle
Init Containers
See the official documentation for details.
A Pod can contain multiple containers running the application, and it can also have one or more Init containers that run before the app containers start.
Init containers are very much like regular containers, except for two points:
- They always run to completion.
- Each must complete successfully before the next one starts.
If a Pod's Init container fails, the kubelet restarts that Init container repeatedly until it succeeds. However, if the Pod's restartPolicy is Never, Kubernetes does not restart the Pod.
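A sketch of that restartPolicy interaction (pod and container names are illustrative): with restartPolicy: Never, a failing init container marks the whole Pod as failed instead of being retried forever.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # illustrative name
spec:
  restartPolicy: Never       # a failing init container fails the Pod permanently
  containers:
  - name: app
    image: nginx
  initContainers:
  - name: must-succeed
    image: busybox
    command: ['sh', '-c', 'exit 1']   # always fails; Pod ends up in Init:Error
```

With the default restartPolicy (Always), the same init container would instead be restarted with back-off until it exits 0.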
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: nginx
    imagePullPolicy: IfNotPresent
  initContainers:
  - name: init-myservices
    image: busyboxplus
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservices; sleep 2; done"]
After starting the Pod, kubectl describe pod shows the init container still running, waiting for the DNS name to resolve:
init-myservices:
Container ID: docker://fe8985e5895ea8e7557b74c86f298e4a3405396b0b98cf58a86ff8f7397683d0
Image: busyboxplus
Image ID: docker-pullable://busyboxplus@sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06
Port: <none>
Host Port: <none>
Command:
sh
-c
until nslookup myservice.default.svc.cluster.local; do echo waiting for myservices; sleep 2; done
State: Running
Started: Sun, 27 Dec 2020 14:22:42 +0800
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4dd4l (ro)
Create the Service it is waiting for:
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[k8s@server1 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
myservice ClusterIP 10.104.20.158 <none> 80/TCP 33s
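Note that this Service has no selector, so it gets a ClusterIP and a DNS record — which is all the init container's nslookup needs — but no endpoints. To actually route traffic to the Pod, a selector would typically be added; a sketch, assuming it should match the app: myapp label from the Pod above:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  selector:
    app: myapp      # matches myapp-pod's label so endpoints are populated
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```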
Once the DNS name resolves, the init container completes and the app container starts:
Init Containers:
init-myservices:
Container ID: docker://fe8985e5895ea8e7557b74c86f298e4a3405396b0b98cf58a86ff8f7397683d0
Image: busyboxplus
Image ID: docker-pullable://busyboxplus@sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06
Port: <none>
Host Port: <none>
Command:
sh
-c
until nslookup myservice.default.svc.cluster.local; do echo waiting for myservices; sleep 2; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 27 Dec 2020 14:22:42 +0800
Finished: Sun, 27 Dec 2020 14:31:34 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4dd4l (ro)
Configuring Probes
Liveness Probe
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 2
          timeoutSeconds: 3
      initContainers:
      - name: init-myservices
        image: busyboxplus
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservices; sleep 2; done"]
kubectl describe pod shows the probe configuration:
Liveness: tcp-socket :80 delay=3s timeout=3s period=2s #success=1 #failure=3
If nginx stops listening on port 80, the liveness probe fails and the kubelet restarts the container.
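Besides the tcpSocket check used above, a liveness probe can also be an httpGet or exec check. A sketch of an equivalent HTTP probe (path / is nginx's default page; failureThreshold shown explicitly with its default value):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 2
  timeoutSeconds: 3
  failureThreshold: 3   # restart after 3 consecutive failures (the default)
```

An HTTP probe fails on any response code outside 200–399, so it catches a hung server that still accepts TCP connections, which a tcpSocket probe would miss.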
Readiness Probe
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 2
          timeoutSeconds: 3
        readinessProbe:
          httpGet:
            path: /test.html
            port: 80
          initialDelaySeconds: 1
          periodSeconds: 3
          timeoutSeconds: 1
      initContainers:
      - name: init-myservices
        image: busyboxplus
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservices; sleep 2; done"]
The containers are Running but not Ready, because /test.html does not exist yet:
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-557997bffc-5f2js 0/1 Running 0 70s
deployment-example-557997bffc-tjrgx 0/1 Running 0 70s
kubectl describe pod shows why:
Warning Unhealthy 2m34s (x19 over 3m28s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
Create test.html inside one of the containers:
[k8s@server1 ~]$ kubectl exec deployment-example-557997bffc-5f2js -it -- bash
root@deployment-example-557997bffc-5f2js:/# echo hello world > /usr/share/nginx/html/test.html
root@deployment-example-557997bffc-5f2js:/# curl localhost/test.html
hello world
root@deployment-example-557997bffc-5f2js:/# exit
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-557997bffc-5f2js 1/1 Running 0 6m2s
deployment-example-557997bffc-tjrgx 0/1 Running 0 6m2s
The Pod where test.html was created is now Ready; the other remains not Ready until its own test.html exists.