I. Introduction to DaemonSet

1. Overview

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When nodes join the cluster, a Pod is added for them; when nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet deletes all the Pods it created.
Some typical uses of a DaemonSet:

  • Run a cluster daemon on every node
  • Run a log collection daemon on every node
  • Run a monitoring daemon on every node

In the simplest case, one DaemonSet covering all nodes is used for each type of daemon. A more complex setup deploys multiple DaemonSets for the same kind of daemon, each with different flags and with different memory and CPU requirements for different hardware types.

2. Characteristics

Pods created by a Deployment are distributed across the nodes, and each node may run one or more replicas. A DaemonSet differs in that each node runs at most one replica of the Pod.
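A DaemonSet can also be restricted to a subset of nodes rather than all of them, by adding a nodeSelector (or node affinity) to the Pod template. A minimal sketch, assuming a hypothetical node label `disktype=ssd`:

```yaml
# Hypothetical fragment: run the DaemonSet only on nodes labeled disktype=ssd.
# A node is labeled with: kubectl label nodes <node-name> disktype=ssd
spec:
  template:
    spec:
      nodeSelector:        # only nodes carrying this label get a replica
        disktype: ssd
```

Nodes without the label get no replica; if the label is later removed from a node, the DaemonSet controller deletes that node's Pod.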

II. Configuration Walkthrough

1. YAML file explained

apiVersion: apps/v1			# API version
kind: DaemonSet				# resource type
metadata:					# metadata
  name: fluentd-elasticsearch	# resource name
  namespace: test				# namespace
  labels:						# labels
    k8s-app: fluentd-logging
spec:							# spec
  selector:						# selector
    matchLabels:				# labels to match
      name: fluentd-elasticsearch
  template:						# Pod template
    metadata:					# metadata
      labels:					# Pod labels
        name: fluentd-elasticsearch
    spec:						# spec
      tolerations:				# tolerations
      - key: node-role.kubernetes.io/control-plane # taint key
        operator: Exists							# operator
        effect: NoSchedule							# effect
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:				# container config
      - name: fluentd-elasticsearch					# container name
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2	# image
        resources:				# resource quotas
          limits:				# resource limits
            memory: 200Mi		# memory, units: Mi/Gi/M/G
          requests:				# resource requests
            cpu: 100m			# CPU, unit: m (millicores)
            memory: 200Mi		# memory
        volumeMounts:			# volume mounts
        - name: varlog			# volume name
          mountPath: /var/log	# path inside the container
      terminationGracePeriodSeconds: 30	# graceful termination period, in seconds
      volumes:					# volumes
      - name: varlog			# volume name
        hostPath:				# volume type
          path: /var/log		# path on the host

2. Taints and Tolerations

For details, see: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/#example-use-cases

2.1 Taints

A taint lets a node repel Pods.
A tainted node is typically a node with an issue, for example one that is short on resources or has a security concern and needs a maintenance upgrade, so new Pods should not be scheduled onto it. A tainted node may, however, still be a healthy and valid worker node; when certain Pods still need to be scheduled onto such nodes, this is achieved with the toleration attribute.

2.2 Tolerations

Tolerations are applied to Pods. A toleration allows the scheduler to place a Pod onto a node with a matching taint. Tolerations allow scheduling but do not guarantee it: as part of its function, the scheduler also evaluates other parameters.

Taints and tolerations work together to keep Pods away from inappropriate nodes. One or more taints can be applied to each node; a node will not accept any Pod that does not tolerate its taints.

2.3 Configuration

Adding and removing taints

[root@k8s-master ~]# kubectl taint nodes k8s-node1 key1=value1:NoSchedule
node/k8s-node1 tainted
[root@k8s-master ~]# kubectl taint nodes k8s-node1 key1=value1:NoSchedule-
node/k8s-node1 untainted
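The kubectl taint command above records the taint in the node object's spec; viewed with `kubectl get node k8s-node1 -o yaml`, it looks roughly like this (a sketch of the relevant fragment):

```yaml
# Node object fragment written by: kubectl taint nodes k8s-node1 key1=value1:NoSchedule
spec:
  taints:
  - key: key1
    value: value1
    effect: NoSchedule
```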

Toleration configuration

tolerations:
- key: "key1"					# key of the matching taint
  operator: "Equal"				# operator
  value: "value1"				# value of the matching taint
  effect: "NoExecute"			# effect of the matching taint
  tolerationSeconds: 3600		# how long to tolerate; the Pod is evicted after this period
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"

The default value of operator is Equal.
A toleration "matches" a taint if they have the same key and the same effect, and:
the operator is Exists (in which case the toleration must not specify a value), or
the operator is Equal and the values are equal.
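Two special cases follow from these rules: an empty key combined with operator Exists matches every taint key, and an omitted effect matches all effects. A sketch of a toleration that tolerates everything (used, for example, by some system-level DaemonSets):

```yaml
# Tolerates all taints: no key + Exists matches every key,
# and omitting effect matches every effect.
tolerations:
- operator: "Exists"
```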

effect values:
NoSchedule: new Pods are not scheduled onto the node, but Pods already running on it are not evicted
NoExecute: new Pods are not scheduled onto the node, and Pods already running on it are evicted
PreferNoSchedule: the scheduler tries to avoid the node, but Pods already running on it are not evicted

III. Hands-on

1. Create a DaemonSet

[root@k8s-master daemonset]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: test
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
[root@k8s-master daemonset]# kubectl apply  -f daemonset.yaml
daemonset.apps/fluentd-elasticsearch created

2. Check the DaemonSet update strategy

2.1 View the ds info

[root@k8s-master daemonset]# kubectl get pod -n test -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-49zlr   1/1     Running   0          19m   10.244.169.188   k8s-node2    <none>           <none>
fluentd-elasticsearch-59d74   1/1     Running   0          19m   10.244.107.225   k8s-node3    <none>           <none>
fluentd-elasticsearch-8h4lk   1/1     Running   0          19m   10.244.235.196   k8s-master   <none>           <none>
fluentd-elasticsearch-8s8wj   1/1     Running   0          19m   10.244.36.111    k8s-node1    <none>           <none>
[root@k8s-master daemonset]# kubectl get ds -n test
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   4         4         4       4            4           <none>          17m

2.2 View the ds update strategy

[root@k8s-master daemonset]# kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n test
RollingUpdate
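RollingUpdate is the default strategy; it can also be set explicitly in the DaemonSet spec, along with how many nodes may be updated at once. A sketch of the relevant fragment:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # alternative: OnDelete (Pods are replaced only after manual deletion)
    rollingUpdate:
      maxUnavailable: 1        # at most one node's Pod is replaced at a time (the default)
```

With `maxUnavailable: 1`, the update proceeds one node at a time, which matches the one-by-one Terminating/Pending/Running sequence in the watch output below.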

3. Rolling update

[root@k8s-master daemonset]# kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n test
daemonset.apps/fluentd-elasticsearch image updated
[root@k8s-master daemonset]# kubectl rollout status ds/fluentd-elasticsearch -n test
daemon set "fluentd-elasticsearch" successfully rolled out
[root@k8s-master ~]# kubectl get pod -n test -w  ## in a second terminal, watch the rolling update in progress
NAME                          READY   STATUS    RESTARTS   AGE
fluentd-elasticsearch-49zlr   1/1     Running   0          22m
fluentd-elasticsearch-59d74   1/1     Running   0          22m
fluentd-elasticsearch-8h4lk   1/1     Running   0          22m
fluentd-elasticsearch-8s8wj   1/1     Running   0          22m
nginx-854d5f75db-skscn        1/1     Running   0          28d




fluentd-elasticsearch-8h4lk   1/1     Terminating   0          22m
fluentd-elasticsearch-8h4lk   1/1     Terminating   0          22m
fluentd-elasticsearch-8h4lk   0/1     Terminating   0          22m
fluentd-elasticsearch-8h4lk   0/1     Terminating   0          22m
fluentd-elasticsearch-8h4lk   0/1     Terminating   0          22m
fluentd-elasticsearch-mwvvc   0/1     Pending       0          0s
fluentd-elasticsearch-mwvvc   0/1     Pending       0          0s
fluentd-elasticsearch-mwvvc   0/1     ContainerCreating   0          0s
fluentd-elasticsearch-mwvvc   0/1     ContainerCreating   0          1s
fluentd-elasticsearch-mwvvc   1/1     Running             0          11s
fluentd-elasticsearch-49zlr   1/1     Terminating         0          22m
fluentd-elasticsearch-49zlr   1/1     Terminating         0          22m
fluentd-elasticsearch-49zlr   0/1     Terminating         0          22m
fluentd-elasticsearch-49zlr   0/1     Terminating         0          22m
fluentd-elasticsearch-49zlr   0/1     Terminating         0          22m
fluentd-elasticsearch-wzpqx   0/1     Pending             0          0s
fluentd-elasticsearch-wzpqx   0/1     Pending             0          0s
fluentd-elasticsearch-wzpqx   0/1     ContainerCreating   0          0s
fluentd-elasticsearch-wzpqx   0/1     ContainerCreating   0          0s
fluentd-elasticsearch-wzpqx   1/1     Running             0          10s
fluentd-elasticsearch-8s8wj   1/1     Terminating         0          22m
fluentd-elasticsearch-8s8wj   1/1     Terminating         0          22m
fluentd-elasticsearch-8s8wj   0/1     Terminating         0          22m
fluentd-elasticsearch-8s8wj   0/1     Terminating         0          22m
fluentd-elasticsearch-8s8wj   0/1     Terminating         0          22m
fluentd-elasticsearch-2z766   0/1     Pending             0          0s
fluentd-elasticsearch-2z766   0/1     Pending             0          0s
fluentd-elasticsearch-2z766   0/1     ContainerCreating   0          0s
fluentd-elasticsearch-2z766   0/1     ContainerCreating   0          1s
fluentd-elasticsearch-2z766   1/1     Running             0          10s
fluentd-elasticsearch-59d74   1/1     Terminating         0          23m
fluentd-elasticsearch-59d74   1/1     Terminating         0          23m
fluentd-elasticsearch-59d74   0/1     Terminating         0          23m
fluentd-elasticsearch-59d74   0/1     Terminating         0          23m
fluentd-elasticsearch-59d74   0/1     Terminating         0          23m
fluentd-elasticsearch-2dg7v   0/1     Pending             0          0s
fluentd-elasticsearch-2dg7v   0/1     Pending             0          0s
fluentd-elasticsearch-2dg7v   0/1     ContainerCreating   0          0s
fluentd-elasticsearch-2dg7v   0/1     ContainerCreating   0          1s
fluentd-elasticsearch-2dg7v   1/1     Running             0          10s

4. Rollback

4.1 View the revision history

[root@k8s-master ~]# kubectl rollout history ds/fluentd-elasticsearch -n test
daemonset.apps/fluentd-elasticsearch
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

4.2 View the current image version

[root@k8s-master ~]# kubectl describe ds fluentd-elasticsearch -n test | grep -i image
    Image:      quay.io/fluentd_elasticsearch/fluentd:v2.6.0

4.3 Roll back to revision 1

[root@k8s-master ~]# kubectl rollout undo ds/fluentd-elasticsearch -n test --to-revision=1
daemonset.apps/fluentd-elasticsearch rolled back

4.4 Verify the image version

[root@k8s-master ~]# kubectl describe ds fluentd-elasticsearch -n test | grep -i image
    Image:      quay.io/fluentd_elasticsearch/fluentd:v2.5.2	# rolled back from v2.6.0 to v2.5.2

4.5 Rolling updates cannot be paused

Pausing a rollout is currently supported only for Deployments:

[root@k8s-master ~]# kubectl rollout pause --help
Mark the provided resource as paused
 Paused resources will not be reconciled by a controller. Use "kubectl rollout resume" to resume a paused resource.Currently only deployments support being paused.

5. Restart the DaemonSet

[root@k8s-master ~]# kubectl rollout restart ds/fluentd-elasticsearch -n test
daemonset.apps/fluentd-elasticsearch restarted