Playing with k8s: an in-depth look at the five common controllers
Understanding the common controllers in depth
1.1 The relationship between Pods and controllers
- Controllers are objects that manage and run containers on the cluster; they are also called workloads.
- A controller is associated with its Pods through a label selector, as shown in the figure below.
- Controllers implement application operations on Pods, such as scaling and rolling upgrades.
1.2 Stateless application controller: Deployment
Deployment functions:
- Deploy stateless applications (simply put, a stateless application's Pods can drift to any node without worrying about data or IP changes)
- Manage Pods and ReplicaSets (the controller that manages the replica count)
- Roll out releases, set replica counts, perform rolling upgrades and rollbacks
- Provide declarative updates, e.g. updating only the image
Use cases: web services, microservices
Stateless:
1) No special base-environment requirements, such as data storage or a network ID
Stateful:
1) When a Pod dies, its IP changes
2) Startup order matters
3) Distributed applications: master/slave, high availability
The figure below shows a standard Deployment YAML, associated with its Pods through labels.
- Create a Deployment with replicas
[root@k8s-master ~]# kubectl create deployment web --image=nginx --dry-run -o yaml > deployment.yaml
W1005 22:08:01.792848 35705 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master ~]# ls
deployment.yaml
[root@k8s-master ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
[root@k8s-master ~]# kubectl apply -f deployment.yaml
deployment.apps/web created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-5dcb957ccc-4pr7t 1/1 Running 0 13s
web-5dcb957ccc-9zgmr 1/1 Running 0 13s
web-5dcb957ccc-fzpp9 1/1 Running 0 13s
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-5dcb957ccc-4pr7t 1/1 Running 0 23s 10.244.0.13 k8s-node2 <none> <none>
web-5dcb957ccc-9zgmr 1/1 Running 0 23s 10.244.0.12 k8s-node2 <none> <none>
web-5dcb957ccc-fzpp9 1/1 Running 0 23s 10.244.1.16 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl get deployment "one Deployment controls multiple replicas"
NAME READY UP-TO-DATE AVAILABLE AGE
web 3/3 3 3 67s
[root@k8s-master ~]# kubectl get rs "the RS manages the replica count"
NAME DESIRED CURRENT READY AGE
web-5dcb957ccc 3 3 3 2m16s
[root@k8s-master ~]# kubectl api-resources "list resources and their short names"
[root@k8s-master ~]#
- Publish the service (expose a port)
[root@k8s-master ~]# kubectl expose deploy web --port=80 --target-port=80 --type=NodePort --name=web --dry-run -o yaml > service.yaml
W1005 22:47:58.698704 50212 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master ~]# ls
service.yaml deployment.yaml
[root@k8s-master ~]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort
[root@k8s-master ~]# kubectl apply -f service.yaml
service/web created
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 37h
tomcat NodePort 10.0.0.26 <none> 8080:32120/TCP 10h
web NodePort 10.0.0.46 <none> 80:30000/TCP 9s
To upgrade the application, i.e. roll out a newer image version, switch to another nginx image, for example:
kubectl set image deployment/web nginx=nginx:1.15
kubectl rollout status deployment/web # check rollout status
If the release fails and you want to roll back to the previous version:
kubectl rollout undo deployment/web # roll back to the previous revision
You can also roll back to a specific revision:
kubectl rollout history deployment/web # view rollout history
kubectl rollout undo deployment/web --revision=2 # roll back to the specified revision
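To make the revision mechanics above concrete, here is a small Python sketch (the class and names are illustrative, not any Kubernetes API): an undo does not rewind the revision counter, it re-applies an older template as a new revision.

```python
class RolloutHistory:
    """Toy model of a Deployment's rollout history (illustration only)."""

    def __init__(self):
        self.revisions = []  # list of (revision_number, image)

    def set_image(self, image):
        rev = len(self.revisions) + 1
        self.revisions.append((rev, image))
        return rev

    def undo(self, to_revision=None):
        # default: roll back to the previous revision
        target = to_revision or len(self.revisions) - 1
        _, image = self.revisions[target - 1]
        # a rollback is recorded as a NEW revision, not a rewind
        return self.set_image(image)

h = RolloutHistory()
h.set_image("nginx:1.14")  # revision 1
h.set_image("nginx:1.15")  # revision 2
h.undo()                   # revision 3, back to nginx:1.14
print(h.revisions[-1])     # (3, 'nginx:1.14')
```

This is why `kubectl rollout history` keeps growing after an undo: the rolled-back template simply shows up again under a higher revision number.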
Scale out / scale in:
kubectl scale deployment nginx-deployment --replicas=5
Setting --replicas higher than the current value scales out; a lower value scales in.
- kubectl set image triggers a rolling update, i.e. Pods are upgraded in batches.
- The rolling-update mechanism is actually simple: it uses two ReplicaSets, an old one and a new one. With 3 replicas, for example, the new RS is first scaled up to 1; once that Pod is ready, the old RS is scaled down to 2, and so on, gradually replacing the old Pods until the old RS reaches 0 and the new RS reaches 3, completing the update. You can watch this process with kubectl describe deployment web.
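The scale-up/scale-down hand-off described above can be sketched in a few lines of Python. This is an illustration under the default settings, not the real controller code; when resolving the 25% defaults, maxSurge rounds up and maxUnavailable rounds down:

```python
import math

def rolling_update_steps(replicas, max_surge=0.25, max_unavailable=0.25):
    """Simulate the old-RS/new-RS hand-off of a Deployment rolling update.
    Assumes at least one of surge/unavailable resolves to >= 1 pod."""
    surge = math.ceil(replicas * max_surge)        # extra pods allowed
    unavail = math.floor(replicas * max_unavailable)  # pods allowed down
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # scale up the new RS, keeping total pods <= replicas + surge
        new = min(replicas, replicas + surge - old)
        steps.append(("scale-up new RS", old, new))
        # once new pods are ready, scale down the old RS, keeping
        # available pods >= replicas - unavail
        old = max(0, replicas - unavail - new) if new < replicas else 0
        steps.append(("scale-down old RS", old, new))
    return steps

for step in rolling_update_steps(3):
    print(step)
```

With 3 replicas and the 25% defaults this resolves to surge=1, unavailable=0, and the simulation produces exactly the three rounds of "create one new, delete one old" seen in the `kubectl get pod -w` transcript below.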
What does the ReplicaSet (RS) do?
1) Controls the replica count
2) Drives rolling upgrades (using two RSs)
3) Tracks release revisions
//Create a Deployment resource running nginx at the latest version
[root@master demo]# vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
status: {}
[root@master demo]# kubectl apply -f nginx-deploy.yaml
deployment.apps/web created
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-5688fdccd9-5spg5 1/1 Running 0 2m49s
web-5688fdccd9-ktktn 0/1 Running 0 2m49s
web-5688fdccd9-xdbzq 1/1 Running 0 6m25s
//Rolling upgrade to 1.15.4
[root@master demo]# kubectl edit deploy web
...
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:1.15.4 "set the version to 1.15.4 and watch the update"
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
...
[root@master demo]# kubectl edit deploy web
deployment.extensions/web edited
[root@master demo]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
web-5688fdccd9-5spg5 1/1 Running 0 30m
web-5688fdccd9-ktktn 1/1 Running 0 30m
web-5688fdccd9-xdbzq 1/1 Running 0 33m
web-656f546dcc-7wnsn 0/1 ContainerCreating 0 5s "a new pod, 7wnsn, is created first"
web-656f546dcc-7wnsn 1/1 Running 0 14s "once it is Running, an old pod is deleted"
web-5688fdccd9-ktktn 1/1 Terminating 0 30m "old pod ktktn is deleted"
web-656f546dcc-tfntr 0/1 Pending 0 0s
web-656f546dcc-tfntr 0/1 Pending 0 0s
web-656f546dcc-tfntr 0/1 ContainerCreating 0 0s "another new pod, tfntr, is created"
web-5688fdccd9-ktktn 0/1 Terminating 0 30m
web-5688fdccd9-ktktn 0/1 Terminating 0 30m
web-5688fdccd9-ktktn 0/1 Terminating 0 30m
web-656f546dcc-tfntr 1/1 Running 0 27s "tfntr is Running"
web-5688fdccd9-5spg5 1/1 Terminating 0 30m "another old pod is deleted"
web-656f546dcc-rrhb9 0/1 Pending 0 0s
web-656f546dcc-rrhb9 0/1 Pending 0 0s
web-656f546dcc-rrhb9 0/1 ContainerCreating 0 0s "and another new pod is created"
web-5688fdccd9-5spg5 0/1 Terminating 0 30m
web-5688fdccd9-5spg5 0/1 Terminating 0 31m
web-5688fdccd9-5spg5 0/1 Terminating 0 31m
web-656f546dcc-rrhb9 1/1 Running 0 17s "rrhb9 is Running; the last old pod is deleted"
web-5688fdccd9-xdbzq 1/1 Terminating 0 34m
web-5688fdccd9-xdbzq 0/1 Terminating 0 34m
web-5688fdccd9-xdbzq 0/1 Terminating 0 34m
web-5688fdccd9-xdbzq 0/1 Terminating 0 34m
//Three rounds in total; the rolling update is complete
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-656f546dcc-7wnsn 1/1 Running 0 11m
web-656f546dcc-rrhb9 1/1 Running 0 10m
web-656f546dcc-tfntr 1/1 Running 0 11m
1.3 Stateful application controller: StatefulSet
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
- Gives Pods an independent lifecycle, keeping startup order and uniqueness
- Stable, unique network identifiers and persistent storage (e.g. etcd config files become unusable if node addresses change)
- Ordered, graceful deployment, scaling, deletion, and termination (e.g. in a MySQL master/slave setup, start the master first, then the slaves)
- Ordered rolling updates
- Use case: databases
- Stateless:
1) A Deployment treats all Pods as identical
2) No ordering requirements
3) No need to care which node a Pod runs on
4) Can scale out and in freely
- Stateful:
1) Instances differ from one another; each has its own identity and metadata, e.g. etcd, zookeeper
2) Instances are not interchangeable, and the application may depend on external storage
- Regular Service vs. headless Service:
1) Service: an access policy for a set of Pods that provides a ClusterIP for in-cluster communication, plus load balancing and service discovery
2) Headless Service: needs no ClusterIP; DNS binds directly to the individual Pods' IPs
Example:
- Create a Service resource
[root@master demo]# vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
[root@master demo]# kubectl apply -f nginx-service.yaml
service/nginx-service created
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15d
nginx-service NodePort 10.0.0.67 <none> 80:46194/TCP 7s
- Create a headless Service resource
[root@master demo]# cat headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
[root@master demo]# kubectl apply -f headless.yaml
service/nginx created
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15d
nginx ClusterIP None <none> 80/TCP 9m40s
nginx-service NodePort 10.0.0.67 <none> 80:46194/TCP 66s
- Deploy the DNS service from a YAML file
//Copy coredns.yaml to root's home directory on master01
https://www.kubernetes.org.cn/4694.html
[root@master demo]# vim coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount '//a service account provides an identity for processes in pods and for external users'
metadata:
  name: coredns
  namespace: kube-system '//specify the namespace'
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole '//create a role defining access permissions'
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding '//bind the cluster role to the service account'
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap '//configures how service discovery works'
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: | '//the CoreDNS configuration file'
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
...
[root@master demo]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master demo]# kubectl get pod,svc -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-56684f94d6-czk8z 1/1 Running 0 29s
pod/kubernetes-dashboard-7dffbccd68-k62p8 1/1 Running 1 6d
pod/kuboard-78bcb484bc-6lxzm 1/1 Running 1 14d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 29s
service/kubernetes-dashboard NodePort 10.0.0.233 <none> 443:30001/TCP 6d7h
service/kuboard NodePort 10.0.0.185 <none> 80:32567/TCP 14d
[root@master demo]#
Create a test Pod and verify DNS resolution
[root@master demo]# vim test-dns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master demo]# kubectl create -f test-dns.yaml
pod/dns-test created
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15d
nginx-headless ClusterIP None <none> 80/TCP 10m
[root@master demo]# vim test-dns.yaml
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 45s "the dns-test pod is running"
web-656f546dcc-7wnsn 1/1 Running 0 24m
web-656f546dcc-rrhb9 1/1 Running 0 23m
web-656f546dcc-tfntr 1/1 Running 0 24m
//Verify DNS resolution
[root@master demo]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-service
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 10.0.0.67 nginx-service.default.svc.cluster.local
//If kubernetes cannot be resolved here, restart flanneld and docker on the node(s)
[root@node01 ~]# systemctl restart flanneld
[root@node01 ~]# systemctl restart docker
DNS can resolve resource names.
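The names nslookup resolved above follow a fixed pattern. A tiny helper (illustrative, not part of any Kubernetes client library) that builds the fully qualified in-cluster DNS name of a Service:

```python
def service_fqdn(service, namespace="default", cluster_domain="cluster.local"):
    """Build a Service's in-cluster DNS name: <svc>.<ns>.svc.<domain>."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("nginx-service"))            # nginx-service.default.svc.cluster.local
print(service_fqdn("kube-dns", "kube-system"))  # kube-dns.kube-system.svc.cluster.local
```

This is why `nslookup nginx-service` inside a default-namespace pod works with just the short name: the resolver's search domains append the rest of the FQDN automatically.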
- Create a StatefulSet resource
[root@master demo]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
//Clean up the environment first
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 38m
web-656f546dcc-7wnsn 0/1 Terminating 1 78m
web-656f546dcc-rrhb9 0/1 Terminating 1 78m
web-656f546dcc-tfntr 0/1 Terminating 1 78m
[root@master demo]# kubectl get deploy
No resources found.
[root@master demo]# kubectl delete pod dns-test
pod "dns-test" deleted
[root@master demo]# kubectl get pod
No resources found.
//Create the statefulset resource
[root@master demo]# kubectl apply -f statefulset.yaml
service/nginx unchanged
statefulset.apps/nginx-statefulset created
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-statefulset-0 1/1 Running 0 3m45s
nginx-statefulset-1 1/1 Running 0 44s
nginx-statefulset-2 1/1 Running 0 39s
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15d
nginx ClusterIP None <none> 80/TCP 38m
nginx-service NodePort 10.0.0.67 <none> 80:46194/TCP 29m
[root@master demo]# kubectl get deploy
No resources found.
- Pods are created in order
For a StatefulSet with N replicas, Pods are deployed sequentially, with ordinals {0 .. N-1}. Check the output of kubectl get in the first terminal.
Note: the nginx-statefulset-1 Pod is not started until nginx-statefulset-0 is Running and Ready.
Pods in a StatefulSet have a unique ordinal index and a stable network identity.
That identity is based on the unique ordinal index the StatefulSet controller assigns to each Pod. Pod names take the form <statefulset name>-<ordinal index>.
Each Pod gets a stable hostname based on its ordinal index.
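The ordering guarantees just described can be sketched in Python (an illustration of the naming/ordering rule, not the controller's code): Pods are named from the ordinal, created 0..N-1, and deleted N-1..0.

```python
def statefulset_pods(name, replicas):
    """Pod names in creation order: <statefulset name>-<ordinal>."""
    return [f"{name}-{i}" for i in range(replicas)]

creation_order = statefulset_pods("nginx-statefulset", 3)
deletion_order = list(reversed(creation_order))  # scale-down runs in reverse

print(creation_order)  # ['nginx-statefulset-0', 'nginx-statefulset-1', 'nginx-statefulset-2']
print(deletion_order)  # ['nginx-statefulset-2', 'nginx-statefulset-1', 'nginx-statefulset-0']
```

Because the name is derived purely from the StatefulSet name and the ordinal, a recreated Pod gets the same name (and DNS entry) back, even though its IP may change.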
- Create the DNS test container
[root@master demo]# kubectl apply -f test-dns.yaml "using the file edited earlier"
pod/dns-test created
[root@master demo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 2s
nginx-statefulset-0 1/1 Running 0 6m57s
nginx-statefulset-1 1/1 Running 0 3m56s
nginx-statefulset-2 1/1 Running 0 3m51s
[root@master demo]# kubectl exec -it dns-test sh
/ # nslookup nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 172.17.43.2 nginx-statefulset-0.nginx.default.svc.cluster.local
Address 2: 172.17.43.4 nginx-statefulset-2.nginx.default.svc.cluster.local
Address 3: 172.17.10.2 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-0.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-0.nginx
Address 1: 172.17.43.2 nginx-statefulset-0.nginx.default.svc.cluster.local
//Note: you can delete a StatefulSet Pod manually; K8S recreates it automatically to match the controller's replica count. Resolve the name again afterwards and you will find the IP address has changed.
[root@master demo]# kubectl delete pod nginx-statefulset-2
pod "nginx-statefulset-2" deleted
[root@master demo]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 4m2s
nginx-statefulset-0 1/1 Running 0 10m
nginx-statefulset-1 1/1 Running 0 7m56s
nginx-statefulset-2 1/1 Running 0 5s
[root@master demo]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/dns-test 1/1 Running 0 5m8s 172.17.10.5 192.168.100.190 <none>
pod/nginx-statefulset-0 1/1 Running 0 12m 172.17.43.2 192.168.100.180 <none>
pod/nginx-statefulset-1 1/1 Running 0 9m2s 172.17.10.2 192.168.100.190 <none>
pod/nginx-statefulset-2 1/1 Running 0 71s 172.17.43.4 192.168.100.180 <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15d <none>
service/nginx ClusterIP None <none> 80/TCP 46m app=nginx
service/nginx-service NodePort 10.0.0.67 <none> 80:46194/TCP 38m app=nginx
NAME DESIRED CURRENT AGE CONTAINERS IMAGES
statefulset.apps/nginx-statefulset 3 3 12m nginx nginx:latest
[root@master demo]#
Pods terminate in order
The controller deletes one Pod at a time, in reverse ordinal order, waiting for each one to shut down completely before deleting the next.
Updating a StatefulSet
Pods in a StatefulSet are updated in reverse ordinal order. Before updating the next Pod, the StatefulSet controller terminates each Pod and waits for it to become Running and Ready. Note that although the controller will not update the next Pod until its ordinal successor is Running and Ready, it will still recreate any Pod that fails during the update, using that Pod's current version. Pods that have already received the update are restored to the updated version; Pods that have not are restored to the previous version. In this way the controller tries to keep the application healthy and the update consistent in the face of intermittent failures.
Note: the difference between a Deployment and a StatefulSet is that a StatefulSet gives the resources it creates an identity.
That identity has three elements:
Domain name: nginx-statefulset-0.nginx
Hostname: nginx-statefulset-0
Storage: PVC
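All three identity elements can be derived from the StatefulSet name, the headless Service name, and the ordinal. A small illustrative helper (the `data-` PVC prefix assumes a volumeClaimTemplate named `data`, which this example manifest does not actually define):

```python
def pod_identity(sts, svc, ordinal, namespace="default"):
    """Derive a StatefulSet pod's identity elements (illustration only)."""
    hostname = f"{sts}-{ordinal}"
    return {
        "hostname": hostname,
        # per-pod DNS record served by the headless Service
        "dns": f"{hostname}.{svc}.{namespace}.svc.cluster.local",
        # PVC name pattern <claim template>-<pod>, assuming a template named "data"
        "pvc": f"data-{hostname}",
    }

print(pod_identity("nginx-statefulset", "nginx", 0)["dns"])
# nginx-statefulset-0.nginx.default.svc.cluster.local
```

This matches the `nslookup nginx-statefulset-0.nginx` result in the transcript above: the headless Service `nginx` gives each pod its own stable DNS record.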
1.4 Daemon controller: DaemonSet
DaemonSet functions:
- Runs one Pod on every Node
- A newly joined Node automatically gets a Pod as well
Use cases: agents, the flannel network, monitoring collectors, log collectors
[root@master demo]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master demo]# kubectl apply -f daemonset.yaml
daemonset.apps/nginx-daemonset created
[root@master demo]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-daemonset-4qv94 1/1 Running 0 8s 172.17.10.6 192.168.100.190 <none>
nginx-daemonset-gsvcr 1/1 Running 0 8s 172.17.43.5 192.168.100.180 <none>
//The DaemonSet creates one pod on every node
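The "one Pod per node" behavior can be thought of as a reconcile loop comparing the node set with where pods currently run. A Python sketch (illustrative only, not the real controller; the third node IP below is hypothetical, standing in for a newly joined node):

```python
def reconcile_daemonset(nodes, pods_by_node):
    """Return (nodes needing a new pod, nodes whose pod must be removed)."""
    to_create = [n for n in nodes if n not in pods_by_node]
    to_delete = [n for n in pods_by_node if n not in nodes]
    return to_create, to_delete

nodes = ["192.168.100.180", "192.168.100.190", "192.168.100.200"]  # .200 just joined
running = {
    "192.168.100.180": "nginx-daemonset-gsvcr",
    "192.168.100.190": "nginx-daemonset-4qv94",
}
print(reconcile_daemonset(nodes, running))  # (['192.168.100.200'], [])
```

This is why no `replicas` field appears in the DaemonSet spec: the desired count is always "the number of (matching) nodes".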
1.5 Batch processing: Job & CronJob
Job: run-to-completion, one-off tasks
Use cases: offline data processing, video transcoding, big-data analysis jobs
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
#job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never # do not create new Pods after the job fails
  backoffLimit: 4 # .spec.backoffLimit caps the number of retries; the default is 6
[root@k8s-master ~]# kubectl apply -f job.yaml
job.batch/pi created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
monitor-cj77z 1/1 Running 0 20m
monitor-k8dlg 1/1 Running 0 20m
pi-nm4nt 0/1 ContainerCreating 0 17s
[root@k8s-master ~]# kubectl get job "list job resources"
NAME COMPLETIONS DURATION AGE
pi 1/1 55s 67s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
monitor-cj77z 1/1 Running 0 21m
monitor-k8dlg 1/1 Running 0 21m
pi-nm4nt 0/1 Completed 0 75s
"a one-off task shows Completed when finished"
[root@k8s-master ~]# kubectl logs pi-nm4nt
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921......
The example above computes pi to 2000 digits and prints it, which takes about 10 seconds.
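For the curious, what the perl one-liner computes can be reproduced with only the Python standard library. A sketch using Machin's formula, pi = 16·arctan(1/5) - 4·arctan(1/239), with `decimal` for arbitrary precision:

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Return pi as a string with n digits after the decimal point."""
    getcontext().prec = n + 10           # working precision with guard digits
    eps = Decimal(10) ** -(n + 8)        # stop once terms are negligible

    def atan_inv(x):                     # arctan(1/x) via its Taylor series
        total, term, k = Decimal(0), Decimal(1) / x, 0
        while abs(term) > eps:
            total += term / (2 * k + 1) * (1 if k % 2 == 0 else -1)
            term /= x * x
            k += 1
        return total

    pi = 16 * atan_inv(Decimal(5)) - 4 * atan_inv(Decimal(239))
    return str(+pi)[: n + 2]             # "3." plus the first n digits

print(pi_digits(20))  # 3.14159265358979323846
```

Like the Job's container, this runs to completion, prints its result, and exits, which is exactly the workload shape Jobs are designed for.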
Check on the task:
kubectl get pods,job
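One detail about backoffLimit worth knowing: retries are not immediate. The documented Job retry back-off is roughly exponential, 10s doubling per failure and capped at six minutes. A sketch of that delay sequence (an approximation of the documented behavior, not controller code):

```python
def backoff_delays(backoff_limit, base=10, cap=360):
    """Approximate per-retry delays in seconds for a failing Job."""
    return [min(base * 2 ** i, cap) for i in range(backoff_limit)]

print(backoff_delays(6))  # [10, 20, 40, 80, 160, 320]
```

So with the default backoffLimit of 6, a consistently failing Job can take several minutes to exhaust its retries rather than failing instantly.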
CronJob: scheduled tasks, like Linux crontab.
Use cases: scheduled notifications, scheduled backups, periodic tasks
Official docs: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
#cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *" "the schedule syntax is the same as crontab's"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure # when the job fails with a nonzero exit code, create a new Pod and retry
The example above prints Hello once a minute.
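The `*/1` in the schedule is standard cron step syntax. A minimal parser for just the minute field (an illustration of the syntax, not a full cron implementation):

```python
def minutes_matching(field):
    """Return which minutes (0-59) a cron minute field matches."""
    if field == "*":
        return list(range(60))
    if field.startswith("*/"):               # step syntax: every N minutes
        step = int(field[2:])
        return list(range(0, 60, step))
    return [int(v) for v in field.split(",")]  # explicit list: "5,35"

print(minutes_matching("*/15"))  # [0, 15, 30, 45]
print(len(minutes_matching("*/1")))  # 60 -> fires every minute
```

So "*/1 * * * *" matches all 60 minutes of every hour, which is why a new `hello-<timestamp>` pod appears each minute in the output below.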
[root@k8s-master ~]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@k8s-master ~]# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 1 5s 65s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1601915100-sp76k 0/1 Completed 0 6s
monitor-cj77z 1/1 Running 0 33m
monitor-k8dlg 1/1 Running 0 33m
pi-nm4nt 0/1 Completed 0 13m
[root@k8s-master ~]# kubectl logs hello-1601915100-sp76k "check the output in the pod log"
Mon Oct 5 16:25:08 UTC 2020
Hello from the Kubernetes cluster
Check on the task:
kubectl get pods,cronjob