k8s: Using DaemonSet and Job
DaemonSet
Pods created by a Deployment are distributed across the nodes, and each node may run several replicas. A DaemonSet differs in that each node runs at most one replica of the Pod.
Typical DaemonSet use cases include:
1. Running a storage daemon on every node, such as glusterd or ceph.
2. Running a log-collection daemon on every node, such as fluentd or logstash.
3. Running a monitoring daemon on every node, such as Prometheus Node Exporter or collectd.
Kubernetes itself uses DaemonSets to run its system components. Run the following command:
[root@master ~]# kubectl get daemonsets.apps -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds-amd64 3 3 3 3 3 <none> 6d22h
kube-proxy 3 3 3 3 3 beta.kubernetes.io/os=linux 6d22h
The DaemonSets kube-flannel-ds and kube-proxy run the flannel and kube-proxy components on every node, respectively:
[root@master job]# kubectl get pod --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6955765f44-g6r8s 1/1 Running 3 7d 10.244.0.13 master <none> <none>
coredns-6955765f44-rfs4k 1/1 Running 3 7d 10.244.0.12 master <none> <none>
etcd-master 1/1 Running 3 7d 192.168.19.160 master <none> <none>
kube-apiserver-master 1/1 Running 3 7d 192.168.19.160 master <none> <none>
kube-controller-manager-master 1/1 Running 3 7d 192.168.19.160 master <none> <none>
kube-flannel-ds-amd64-fh8xp 1/1 Running 4 7d 192.168.19.160 master <none> <none>
kube-flannel-ds-amd64-nrth4 1/1 Running 3 7d 192.168.19.162 node2 <none> <none>
kube-flannel-ds-amd64-pdnd2 1/1 Running 4 7d 192.168.19.161 node1 <none> <none>
kube-proxy-8fpsm 1/1 Running 3 7d 192.168.19.161 node1 <none> <none>
kube-proxy-bk6z5 1/1 Running 3 7d 192.168.19.160 master <none> <none>
kube-proxy-t4bqz 1/1 Running 3 7d 192.168.19.162 node2 <none> <none>
kube-scheduler-master 1/1 Running 3 7d 192.168.19.160 master <none> <none>
Because flannel and kube-proxy are system components, the namespace must be specified on the command line with --namespace=kube-system. Without it, kubectl only returns resources in the default namespace.
DaemonSet example 1:
[root@master daemon]# mkdir busybox-1
[root@master daemon]# cd busybox-1/
[root@master busybox-1]# vim busybox.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox-daemonset
spec:
  selector:
    matchLabels:
      app: busy
  template:
    metadata:
      labels:
        app: busy
    spec:
      containers:
      - name: box
        image: busybox
        command: ["sh", "-c", "while true; do echo 'this test'; sleep 10; done"]
[root@master busybox-1]# kubectl apply -f busybox.yml
daemonset.apps/busybox-daemonset created
View:
[root@master busybox-1]# kubectl get daemonsets.apps
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
busybox-daemonset 2 2 0 2 0 <none> 7s
[root@master busybox-1]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox-daemonset-949tl 0/1 Error 0 20s 10.244.1.15 node1 <none> <none>
busybox-daemonset-qpsw9 0/1 Error 0 20s 10.244.2.13 node2 <none> <none>
Delete:
[root@master busybox-1]# kubectl delete daemonsets.apps busybox-daemonset
daemonset.apps "busybox-daemonset" deleted
DaemonSet example 2:
Prometheus is a popular monitoring solution. Node Exporter is the Prometheus agent, and it runs as a daemon on every monitored node.
[root@master ~]# cd daemon/
[root@master daemon]# mkdir exporter
[root@master daemon]# cd exporter/
[root@master exporter]# vim export.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: export-daemonset
spec:
  selector:
    matchLabels:
      app: export
  template:
    metadata:
      labels:
        app: export
    spec:
      hostNetwork: true
      containers:
      - name: export
        image: prom/node-exporter
        command:
        - /bin/node_exporter
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/devicemapper|rootfs/var/lib/docker/aufs)($$|/)
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
[root@master exporter]# kubectl apply -f export.yml
View:
[root@master exporter]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
export-daemonset-9mp7j 1/1 Running 0 19s 10.244.1.16 node1 <none> <none>
export-daemonset-txmvz 1/1 Running 0 19s 10.244.2.14 node2 <none> <none>
hostNetwork: true makes the Pod use the host's network directly.
command sets the container's startup command.
Volumes map host paths into the container.
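By default a DaemonSet schedules one Pod on every node (the NODE SELECTOR column shown earlier was <none>). To run the daemon only on a subset of nodes, a nodeSelector can be added under the Pod template's spec. A minimal sketch; the disktype=ssd label is a hypothetical example and would have to be applied to the target nodes first (kubectl label node node1 disktype=ssd):

```yaml
# Fragment of a DaemonSet spec; only nodes carrying the
# (hypothetical) label disktype=ssd will run the daemon.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```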
Job
Containers fall into two classes by how long they run: service containers and work containers.
Service containers provide a service continuously and need to keep running, e.g. an HTTP server or a system daemon. Work containers are one-off tasks, such as batch jobs; the container exits once the work completes.
Kubernetes Deployments, ReplicaSets, and DaemonSets all manage service containers; for work containers, we use a Job.
Deploy a Job:
[root@master daemon]# mkdir job
[root@master daemon]# cd job/
[root@master job]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echo", "hello world"]
      restartPolicy: Never
[root@master job]# kubectl apply -f job.yml
batch/v1 is the apiVersion for Job.
kind specifies that the resource type is Job.
restartPolicy specifies when the container should be restarted. For a Job it can only be Never or OnFailure;
for other controllers (such as a Deployment) it can also be set to Always.
View the Job's status:
[root@master job]# kubectl get jobs.batch
NAME COMPLETIONS DURATION AGE
myjob 0/1 14s 14s
View the Pod's status:
[root@master job]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myjob-cxdwh 0/1 Completed 0 36s 10.244.1.17 node1 <none> <none>
View the Pod's standard output:
[root@master job]# kubectl logs -f myjob-cxdwh
hello world
Deploy a failing Job:
[root@master job]# cat job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echos", "hello world"]
      restartPolicy: Never
[root@master job]# kubectl apply -f job.yml
Check the status:
[root@master job]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myjob-mzxnn 0/1 RunContainerError 1 68s
[root@master job]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myjob-mzxnn 0/1 CrashLoopBackOff 1 85s
When the first Pod starts, the container fails and exits. Because of restartPolicy: Never, that container is not restarted; but the Job's DESIRED Pod count is 1 while successful is still 0, so the requirement is not met. Kubernetes therefore keeps starting new Pods until successful reaches 1. The only way to stop this behavior is to delete the Job.
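Deleting the Job by hand is not the only way out: the standard spec.backoffLimit field caps how many times the Job retries failed Pods before the whole Job is marked Failed. A sketch based on the failing Job above:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  backoffLimit: 4          # stop retrying after 4 failed Pods
  template:
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echos", "hello world"]   # deliberately broken command
      restartPolicy: Never
```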
Run Jobs in parallel:
Running multiple Pods at the same time improves a Job's throughput; this is configured with parallelism:
[root@master job]# cat job-2.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  parallelism: 2
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echo", "hello world"]
      restartPolicy: OnFailure
View the Pods:
[root@master job]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myjob-cnfw9 0/1 Completed 0 30s 10.244.2.17 node2 <none> <none>
myjob-ln2b8 0/1 Completed 0 30s 10.244.1.19 node1 <none> <none>
We can also set the total number of Pods that must complete successfully with completions:
[root@master job]# cat job-2.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echo", "hello world"]
      restartPolicy: OnFailure
Two Pods run at a time until a total of 6 Pods have completed successfully.
View the Pods:
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myjob-2rmx8 0/1 Completed 0 113s 10.244.1.26 node1 <none> <none>
myjob-5t5kx 0/1 Completed 0 2m34s 10.244.1.21 node1 <none> <none>
myjob-7bl2b 0/1 Completed 0 117s 10.244.2.20 node2 <none> <none>
myjob-8bvdp 0/1 Completed 0 2m9s 10.244.1.24 node1 <none> <none>
myjob-dxjh7 0/1 Completed 0 2m19s 10.244.1.22 node1 <none> <none>
myjob-zxsx4 0/1 Completed 0 2m34s 10.244.2.19 node2 <none> <none>
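With completions: 6 and parallelism: 2, the Job runs in three waves of two Pods each. A related standard field, spec.activeDeadlineSeconds, puts an upper bound on a Job's total runtime: once exceeded, running Pods are terminated and the Job is marked failed. A sketch (the 120-second deadline is an arbitrary example):

```yaml
spec:
  completions: 6
  parallelism: 2
  activeDeadlineSeconds: 120   # fail the whole Job after 2 minutes
```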
Run a Job on a schedule:
[root@master job]# cat cronjob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job
            image: busybox
            command: ["echo", "hello world"]
          restartPolicy: OnFailure
batch/v1beta1 is the apiVersion for CronJob.
kind specifies that the resource type is CronJob.
schedule specifies when to run the Job, in standard Linux cron format. Here */1 * * * * means once every minute; the fields are, in order: minute, hour, day-of-month, month, day-of-week.
jobTemplate defines the Job template, in the same format as the Jobs above.
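The five cron fields cover most scheduling needs. A few example schedule values (only one can be active at a time, so the alternatives are commented out):

```yaml
schedule: "*/1 * * * *"     # every minute
# schedule: "0 3 * * *"     # every day at 03:00
# schedule: "30 2 * * 0"    # every Sunday at 02:30
# schedule: "0 */6 * * *"   # every six hours
```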