The Five Kubernetes Controller Types
The five K8s controllers
Controller types in k8s
Kubernetes ships with many built-in controllers. Each one works like a state machine that drives Pods toward a desired state and behavior.
Deployment: suited to deploying stateless services
StatefulSet: suited to deploying stateful services
DaemonSet: deploy once and every node runs a copy; typical use cases include:
running a cluster storage daemon on every node, such as glusterd or ceph
running a log-collection daemon on every node, such as fluentd or logstash
running a monitoring daemon on every node, such as Prometheus Node Exporter
Job: run a task once
CronJob: run a task on a schedule
Controllers are also called workloads: Pods rely on their controller for day-to-day operations such as scaling and upgrading.
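Scaling, for example, is done by changing the controller's replica count rather than by touching Pods directly. A quick sketch with kubectl (the deployment name matches the nginx-deployment created later in this article; these commands are not part of the original walkthrough):
kubectl scale deployment nginx-deployment --replicas=5    ## scale out to 5 replicas
kubectl scale deployment nginx-deployment --replicas=3    ## scale back in to 3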
The Deployment controller
Suited to stateless applications. A Deployment manages Pods and ReplicaSets and provides rollout, replica configuration, rolling updates, and rollback. It also supports declarative updates, for example rolling out nothing more than a new image.
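A rolling update and rollback can be driven entirely through kubectl. A minimal sketch against the nginx-deployment created below (the target tag nginx:1.16.0 is only illustrative, not part of this article's session):
kubectl set image deployment/nginx-deployment nginx1=nginx:1.16.0    ## roll out a new image for the nginx1 container
kubectl rollout status deployment/nginx-deployment                   ## watch the rollout progress
kubectl rollout undo deployment/nginx-deployment                     ## roll back to the previous revision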
Testing the Deployment controller
Write a YAML file and create the nginx Pod resources
[root@localhost bate3]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment                  ## controller type
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                     ## three replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@localhost bate3]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@localhost bate3]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
nginx-deployment-78cdb5b557-g42hm 0/1 ContainerCreating 0 19s
nginx-deployment-78cdb5b557-rbpgh 0/1 ContainerCreating 0 19s
nginx-deployment-78cdb5b557-zt2ld 1/1 Running 0 19s
nginx-deployment-78cdb5b557-rbpgh 1/1 Running 0 60s
nginx-deployment-78cdb5b557-g42hm 1/1 Running 0 81s
Inspect the controller parameters: either describe or edit will do
[root@master test]# kubectl describe deploy nginx-deployment
##or use edit
[root@master test]# kubectl edit deploy nginx-deployment
##both commands show detailed information about the resource, including names, spec fields, and events
...output omitted
  strategy:
    rollingUpdate:                ## this block defines the rolling-update behavior
      maxSurge: 25%               ## percentage of the desired pod count: the total may temporarily grow to 125%
      maxUnavailable: 25%         ## percentage of the desired pod count: at most 25% of pods may be unavailable, so at least 75% stay up
    type: RollingUpdate
...output omitted
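These are the defaults. If you want different values, the strategy can be declared explicitly in the Deployment spec (a sketch, not part of the manifests used in this article):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 ## allow at most one extra Pod above the desired count
      maxUnavailable: 0           ## never drop below the desired count during an update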
The StatefulSet controller
1. Suited to stateful applications
2. Gives Pods independent lifecycles while preserving startup order and uniqueness
3. Stable, unique network identifiers and persistent storage (for example, an etcd config that references node addresses breaks if those addresses change)
4. Ordered, graceful deployment, scaling, deletion, and termination (for example, in a MySQL primary/replica setup the primary starts first, then the replicas)
5. Ordered rolling updates
6. Typical use case: databases
Characteristics of stateless services:
- a Deployment treats all Pods as identical
- no ordering requirements
- it does not matter which node a Pod lands on
- Pods can be scaled up or down freely
Characteristics of stateful services:
- instances differ from one another; each has its own identity and metadata, e.g. etcd or zookeeper
- instances are not interchangeable, and the application often relies on external storage.
Regular Services vs. headless Services
Service: an access policy for a group of Pods; it provides a cluster IP for communication inside the cluster, plus load balancing and service discovery.
Headless Service: has no cluster IP and resolves directly to the backing Pod IPs; headless Services are commonly used for stateful deployments with StatefulSets.
Create the headless Service and the DNS resources
Because the Pod IPs behind a stateful service are dynamic, a headless Service has to be paired with DNS so Pods can be reached by name
Write the YAML file and create the Service resource
[root@localhost bate3]# vim nginx-headless.yaml
apiVersion: v1
kind: Service                     ## create a Service resource
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None                 ## no cluster IP, i.e. headless
  selector:
    app: nginx
[root@localhost bate3]# kubectl create -f nginx-headless.yaml
service/nginx-headless created
[root@localhost bate3]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-service NodePort 10.0.0.138 <none> 80:42018/TCP 2d9h
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d
mytomcat NodePort 10.0.0.61 <none> 80:30800/TCP 43h
nginx-headless ClusterIP None <none> 80/TCP 7s
Configure the DNS service, created from a YAML file
[root@localhost bate3]# vim coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount              ## service account that gives the CoreDNS pods an identity
metadata:
  name: coredns
  namespace: kube-system          ## target namespace
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                 ## cluster role granting the access permissions CoreDNS needs
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding          ## bind the cluster role to the coredns service account
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap                   ## used to adjust how service discovery behaves
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |                     ## the CoreDNS configuration file
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@localhost bate3]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@localhost bate3]# kubectl get svc,pod -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 28s
service/kubernetes-dashboard NodePort 10.0.0.238 <none> 443:30001/TCP 6d
NAME READY STATUS RESTARTS AGE
pod/coredns-56684f94d6-bpszw 1/1 Running 0 28s
pod/kubernetes-dashboard-7dffbccd68-wc77c 1/1 Running 1 2d
Create a test Pod and verify DNS resolution
[root@localhost bate3]# kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # exit
pod "sh" deleted
Create the StatefulSet resource
[root@localhost bate3]# vim statefulset-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None                 ## headless service
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3                     ## replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
[root@localhost bate3]# vim pod-dns-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@localhost bate3]# mv coredns.yaml /
[root@localhost bate3]# ls
nginx-deployment.yaml nginx-headless.yaml pod-dns-test.yaml statefulset-test.yaml
[root@localhost bate3]# kubectl delete -f .
deployment.apps "nginx-deployment" deleted
service "nginx-headless" deleted
Error from server (NotFound): error when deleting "pod-dns-test.yaml": pods "dns-test" not found
Error from server (NotFound): error when deleting "statefulset-test.yaml": services "nginx" not found
Error from server (NotFound): error when deleting "statefulset-test.yaml": statefulsets.apps "nginx-statefulset" not found
[root@localhost bate3]# mv /coredns.yaml /bate3
Create the resources and test
[root@localhost bate3]# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.extensions/coredns unchanged
service/kube-dns unchanged
[root@localhost bate3]# kubectl create -f statefulset-test.yaml
service/nginx created
statefulset.apps/nginx-statefulset created
[root@localhost bate3]# kubectl create -f pod-dns-test.yaml
pod/dns-test created
[root@localhost bate3]# kubectl get svc,pods
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/http-service NodePort 10.0.0.138 <none> 80:42018/TCP 2d9h
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d
service/mytomcat NodePort 10.0.0.61 <none> 80:30800/TCP 43h
service/nginx ClusterIP None <none> 80/TCP 68s
NAME READY STATUS RESTARTS AGE
pod/dns-test 1/1 Running 0 15s
pod/nginx-statefulset-0 1/1 Running 0 68s
pod/nginx-statefulset-1 1/1 Running 0 61s
pod/nginx-statefulset-2 1/1 Running 0 50s
[root@master test]# kubectl exec -it dns-test sh ##exec into the pod to test
/ # nslookup nginx-statefulset-0.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-0.nginx
Address 1: 172.17.45.2 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-1.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-1.nginx
Address 1: 172.17.9.2 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # exit
Compared to a Deployment, a StatefulSet Pod has an identity (the ordinal index makes each Pod unique).
The three elements of that identity:
1. DNS name: nginx-statefulset-0.nginx
2. Hostname: nginx-statefulset-0
3. Storage (PVC), as sketched below
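The example above does not declare persistent storage. As a rough sketch of how per-Pod PVCs would be added (assuming a StorageClass named managed-nfs-storage exists in the cluster, which is not part of this article's setup), volumeClaimTemplates is appended to the StatefulSet spec:
  volumeClaimTemplates:
  - metadata:
      name: www                               ## each Pod gets its own PVC: www-nginx-statefulset-0, -1, -2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage   ## hypothetical StorageClass, adjust to your environment
      resources:
        requests:
          storage: 1Gi
Each replica then keeps the same claim across rescheduling, which is what provides the storage part of its identity.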
Ordered deployment and ordered scaling in a StatefulSet
Ordered deployment: Pods are created in order from 0 up to N-1
Ordered scale-down and deletion: Pods are removed in order from N-1 down to 0
Whether deploying or deleting, the StatefulSet controller finishes with the current Pod before moving on to the next: when creating, it waits for the Pod to become Running and Ready; when deleting, it waits for the Pod to terminate completely.
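Scaling follows the same ordering. A minimal sketch using the StatefulSet created above (these commands are not run in this article):
kubectl scale statefulset nginx-statefulset --replicas=5    ## creates nginx-statefulset-3, then -4, in order
kubectl scale statefulset nginx-statefulset --replicas=3    ## deletes -4, then -3, in reverse order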
The DaemonSet controller
Runs one Pod on every node
Newly added nodes automatically get a Pod as well
Use cases: monitoring, distributed storage, log collection, and so on
Write the YAML file and create the resource for testing
[root@localhost bate3]# vim daemonset-test.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
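A DaemonSet only lands on nodes its Pods can tolerate. If you also wanted a copy on nodes tainted node-role.kubernetes.io/master:NoSchedule, a toleration could be added to the Pod template, similar to the one in the CoreDNS manifest above (a sketch, not used in this test):
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx:1.15.4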
Check where the resource was scheduled
[root@localhost bate3]# vim daemonset-test.yaml
[root@localhost bate3]# kubectl create -f daemonset-test.yaml
daemonset.apps/nginx-daemonset created
[root@localhost bate3]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
dns-test 1/1 Running 0 8m24s 172.17.9.5 20.0.0.5 <none>
nginx-daemonset-4whnb 1/1 Running 0 4s 172.17.45.4 20.0.0.4 <none>
nginx-daemonset-6glzs 1/1 Running 0 4s 172.17.9.6 20.0.0.5 <none>
...output truncated
The Job controller
Runs a task once, similar to a one-off job in Linux
Use cases: offline data processing, video transcoding, and similar batch work
Write the YAML file and create the resource
[root@localhost bate3]# vim job-test.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]    ## computes pi to 2000 digits
      restartPolicy: Never
  backoffLimit: 4    ## retry limit; the default is 6, lowered to 4 here. With restartPolicy: Never a failed run creates a new Pod, so capping the retries matters.
[root@localhost bate3]# kubectl create -f job-test.yaml
job.batch/pi created
Check the Job resource
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 23m
nginx-daemonset-m8lm5 1/1 Running 0 5m33s
nginx-daemonset-sswfq 1/1 Running 0 5m33s
nginx-statefulset-0 1/1 Running 0 23m
nginx-statefulset-1 1/1 Running 0 23m
nginx-statefulset-2 1/1 Running 0 23m
pi-dhzrg 0/1 ContainerCreating 0 50s
pi-dhzrg 1/1 Running 0 61s
pi-dhzrg 0/1 Completed 0 65s ##the Pod completes once the run succeeds
^C
[root@master test]# kubectl logs pi-dhzrg ##the result can be read from the logs
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
Check and delete the Job resource
^C[root@localhost bate3]# kubectl delete -f job-test.yaml
job.batch "pi" deleted
[root@localhost bate3]# kubectl get job
No resources found.
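A Job can also run more than one Pod; for reference, the spec accepts completions and parallelism (a sketch, not used in this article):
spec:
  completions: 5     ## the Job is done after 5 Pods finish successfully
  parallelism: 2     ## run at most 2 Pods at the same time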
The CronJob controller
Runs tasks on a schedule, like crontab in Linux.
Use cases: notifications, backups, and so on
Write the YAML file and create the resource
Create a task that prints hello every minute
[root@localhost bate3]# vim cronjob-test.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure    ## restart the container if it exits with an error
[root@localhost bate3]# kubectl create -f cronjob-test.yaml
cronjob.batch/hello created
Check the Pod resources
[root@localhost bate3]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
...output truncated
hello-1602725700-tp7kw 0/1 Pending 0 0s
hello-1602725700-tp7kw 0/1 Pending 0 0s
hello-1602725700-tp7kw 0/1 ContainerCreating 0 0s
hello-1602725700-tp7kw 0/1 Completed 0 4s
hello-1602725760-gtxbv 0/1 Pending 0 0s
hello-1602725760-gtxbv 0/1 Pending 0 0s
hello-1602725760-gtxbv 0/1 ContainerCreating 0 0s
hello-1602725760-gtxbv 0/1 Completed 0 2s
...output truncated
Check the log output
[root@localhost bate3]# kubectl logs hello-1602725700-tp7kw
Thu Oct 15 01:35:04 UTC 2020
Hello from the Kubernetes cluster
[root@localhost bate3]# kubectl logs hello-1602725760-gtxbv
Thu Oct 15 01:36:01 UTC 2020
Hello from the Kubernetes cluster
Remember to delete the CronJob afterwards, otherwise it keeps consuming resources
[root@localhost bate3]# kubectl delete -f cronjob-test.yaml
cronjob.batch "hello" deleted