DaemonSet Controller

The DaemonSet controller is a Kubernetes controller that ensures every node in the cluster runs a replica of a given Pod. It is typically used to deploy system-level services across the whole cluster:

  • Run a storage daemon on every node, e.g. GlusterFS or Ceph.
  • Run a log-collection daemon on every node, e.g. Fluentd or Logstash.
  • Run a monitoring daemon on every node, e.g. Prometheus Node Exporter or the Zabbix agent.

The DaemonSet controller watches the nodes in the cluster and makes sure one Pod instance is running on each of them. As nodes are added or removed, the controller adjusts the number of Pods accordingly, so that every node always runs exactly one copy.
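
For reference, a minimal DaemonSet sketch (separate from the EFK deployment below) that runs one monitoring Pod per node; the node-exporter image and labels are only placeholders:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: docker.io/prom/node-exporter:latest   # placeholder image/tag
        ports:
        - containerPort: 9100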

DaemonSet Log Collection with EFK

Deploy a three-node Elasticsearch cluster with a StatefulSet, use a headless Service for communication within the cluster, and deploy dynamic storage to provide persistent volumes for Elasticsearch.

Deploy the StorageClass

See https://blog.csdn.net/qq42004/article/details/137113713?spm=1001.2014.3001.5502
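
The StatefulSet below requests its volumes from a StorageClass named nfs-stgc-delete. A minimal sketch of what that object might look like, assuming the NFS subdir external provisioner is used as in the linked post (the provisioner name is an assumption):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-stgc-delete
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumption: NFS subdir provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate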

Deploy Elasticsearch

Official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/getting-started.html

  • Deploy the Services

    Deploy a headless Service for communication between the Elasticsearch nodes, plus a NodePort Service to make it easy to query cluster information later.

apiVersion: v1
kind: Namespace
metadata:
  name: efk
  labels:
    app: efk
---
apiVersion: v1
kind: Service
metadata:
  name: efk
  labels:
    app: efk
  namespace: efk
spec:
  clusterIP: None
  selector:
    app: efk
  ports:
  - name: efk-port-http
    port: 9200
  - name: efk-port-inside
    port: 9300
---
apiVersion: v1
kind: Service
metadata:
  name: efk-http
  labels:
    app: efk-http
  namespace: efk
spec:
  selector:
    app: efk
  ports:
  - name: efk-port-http
    port: 9200
    protocol: TCP
    targetPort: 9200
    nodePort: 32005
  type: NodePort


Elasticsearch uses port 9200 for all API calls made over HTTP: search, aggregations, monitoring, and any other HTTP request; all client libraries talk to Elasticsearch through this port. Port 9300 carries a custom binary protocol used for communication between the nodes of the cluster, for things such as cluster state changes, master election, nodes joining or leaving, and shard allocation.
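
Since port 9200 serves the HTTP API, it is also a natural target for a readiness probe. The manifest below does not define one; a hedged sketch of what could be added to the Elasticsearch container:

readinessProbe:
  httpGet:
    path: /_cluster/health?local=true
    port: 9200
  initialDelaySeconds: 30
  periodSeconds: 10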

  • Deploy Elasticsearch
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: efk
  labels:
    app: efk
  namespace: efk
spec:
  replicas: 3
  serviceName: efk 
  selector:
    matchLabels:
      app: efk
  template:
    metadata:
      name: efk-con
      labels:
        app: efk
    spec:
      hostname: efk 
      initContainers:
      - name: chmod-data
        image: docker.io/library/busybox:latest
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - "-c"
        - |
           chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
           privileged: true
        volumeMounts:
        - name: data 
          mountPath: /usr/share/elasticsearch/data
      - name: vm-max-count
        image: docker.io/library/busybox:latest
        imagePullPolicy: IfNotPresent
        command: ["sysctl","-w","vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: ulimit
        image: docker.io/library/busybox:latest
        imagePullPolicy: IfNotPresent
        command: 
           [
              "sh",
              "-c",
              "ulimit -Hl unlimited && ulimit -Sl unlimited && ulimit -n 65536 && id",
            ]
    
        securityContext:
          privileged: true
      containers:
      - name: efk
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.19
        imagePullPolicy: IfNotPresent
        resources:
          limits: 
            cpu: "1"
            #memory: "512Mi"
          requests:
            cpu: "0.5"  
            #memory: "256Mi"   
        env: 
        - name: cluster.name
          value: efk-log
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   
        - name: discovery.seed_hosts
          value: "efk-0.efk,efk-1.efk,efk-2.efk"
        - name: cluster.initial_master_nodes
          value: "efk-0,efk-1,efk-2"
        - name: bootstrap.memory_lock
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
          name: port-http
          protocol: TCP
        - containerPort: 9300
          name: port-inside
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        - name: time
          mountPath: /etc/localtime
      volumes:
      - name: time
        hostPath: 
          path: /etc/localtime
          type: File 
    

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: "nfs-stgc-delete"
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi




The initContainers adjust several system parameters (when preparing the deployment, you can first run the same Elasticsearch version in Docker to see which parameters need to be changed).

 securityContext:
      privileged: true

This enables privileged mode for the container.
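
As an alternative to a privileged sysctl init container, Kubernetes can set sysctls through the Pod securityContext. Note that vm.max_map_count is an unsafe sysctl, so this only works when the kubelet explicitly allows it; a hedged sketch:

spec:
  securityContext:
    sysctls:
    - name: vm.max_map_count
      value: "262144"   # requires kubelet flag --allowed-unsafe-sysctls=vm.max_map_count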

env:
  - name: cluster.name
    value: efk-log
  - name: node.name
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: discovery.seed_hosts
    value: "efk-0.efk,efk-1.efk,efk-2.efk"
  - name: cluster.initial_master_nodes
    value: "efk-0,efk-1,efk-2"
  - name: bootstrap.memory_lock
    value: "false"
  - name: ES_JAVA_OPTS
    value: "-Xms512m -Xmx512m"
  1. node.name: set explicitly to avoid an Elasticsearch startup error about node metadata not being set.
  2. discovery.seed_hosts: configures node discovery (see the note on DNS names after this list).
  3. cluster.initial_master_nodes: when a brand-new Elasticsearch cluster starts for the very first time, a cluster bootstrapping step determines the set of master-eligible nodes whose votes are counted in the first election. In development mode, with no discovery settings configured, the nodes perform this step automatically by themselves. Because such auto-bootstrapping is inherently unsafe, when you start a brand-new cluster in production mode you must explicitly list the master-eligible nodes whose votes should be counted in the first election; this list is set with cluster.initial_master_nodes. The setting must not be used when restarting a cluster or adding new nodes to an existing cluster.
  4. ES_JAVA_OPTS: passes JVM (Java Virtual Machine) options to the Elasticsearch process. -Xms512m -Xmx512m sets both the initial and the maximum heap size to 512 MB.
  5. bootstrap.memory_lock: locks the process memory to disable swapping. Kubernetes itself disables swap, so it is set to false here (false is also the default).
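
Names like efk-0.efk resolve because the headless Service efk gives each StatefulSet Pod a DNS record of the form <pod>.<service>.<namespace>.svc.cluster.local. Inside the efk namespace the short form is sufficient; the fully qualified equivalent would be:

- name: discovery.seed_hosts
  value: "efk-0.efk.efk.svc.cluster.local,efk-1.efk.efk.svc.cluster.local,efk-2.efk.efk.svc.cluster.local"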

Check the cluster information:

  • Node status: http://192.168.0.100:32005/_cat/nodes?v&pretty
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
10.244.1.245           28          79   4    3.63    2.97     1.78 cdfhilmrstw -      efk-2
10.244.1.244           30          79   3    3.63    2.97     1.78 cdfhilmrstw -      efk-0
10.244.2.235           51          67   5    0.70    0.73     0.67 cdfhilmrstw *      efk-1
  • Cluster health: http://192.168.0.100:32005/_cluster/health?pretty (status: green)
{
  "cluster_name" : "efk-log",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
  • Cluster state: http://192.168.0.100:32005/_cluster/state?pretty
{
  "cluster_name" : "efk-log",
  "cluster_uuid" : "STY6wpxzS0qHBy5XrjQRuw",
  "version" : 93,
  "state_uuid" : "bV0lB3d6TzOYIZc9WE92dg",
  "master_node" : "-KIR5smtTvCZE3RJSEPh1w",
  "blocks" : { },
  "nodes" : {
    "HEpcL5aSTdaEXdE_75319g" : {
      "name" : "efk-2",
      "ephemeral_id" : "ToczCgVFSCuloVO2gitEaA",
      "transport_address" : "10.244.1.245:9300",
      "attributes" : {
        "ml.machine_memory" : "3956289536",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "536870912",
        "transform.node" : "true"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    },
    "Qs7bNaTvS8CSdMmQuu9Jog" : {
      "name" : "efk-0",
      "ephemeral_id" : "Gdyw-MhAQhmeLl7k2Sm5HQ",
      "transport_address" : "10.244.1.244:9300",
      "attributes" : {
        "ml.machine_memory" : "3956289536",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "536870912",
        "transform.node" : "true"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    },
    "-KIR5smtTvCZE3RJSEPh1w" : {
      "name" : "efk-1",
      "ephemeral_id" : "J7o-nw8LQxaxw7O9kjvEIg",
      "transport_address" : "10.244.2.235:9300",
      "attributes" : {
        "ml.machine_memory" : "3956289536",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "512",
        "ml.max_jvm_size" : "536870912"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    }
  },
  .......

Deploy Kibana

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: efk
  labels:
    app: kibana
spec:
  selector:
    app: kibana
  ports:
  - name: port-kibana
    port: 5601
    protocol: TCP
    nodePort: 30006
    targetPort: 5601
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: kibana
  namespace: efk
  labels:
    app: kibana
spec: 
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      name: kibana
      labels:
        app: kibana
    spec:
      nodeSelector:
        app: ng        # cluster-specific: pins Kibana to the node labeled app=ng
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.17.19
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "0.5"
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://efk:9200
        volumeMounts:
          - name: time
            mountPath: /etc/localtime
          - name: data-kibana
            mountPath: /etc/elk/kibana/data
            subPath: data
        ports:
        - name: kibana-port
          containerPort: 5601
      volumes:
      - name: time
        hostPath: 
          path: /etc/localtime
          type: File 
      - name: data-kibana
        hostPath:
          path: /mnt/kibana
env:
  - name: ELASTICSEARCH_HOSTS
    value: http://efk:9200

This configures the address Kibana uses to reach Elasticsearch.
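
The bare host name efk resolves because Kibana runs in the same efk namespace as the Service; from another namespace the fully qualified Service name would be needed. A hedged equivalent:

env:
  - name: ELASTICSEARCH_HOSTS
    value: http://efk.efk.svc.cluster.local:9200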

Deploy Fluentd with a DaemonSet

  • Create the configuration file
    This cluster runs Kubernetes on containerd, so the logs to read live under /var/log/containers, and the CRI-format log lines must be parsed, otherwise you will run into malformed-JSON errors. A short configuration file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent
  labels:
    app: fluent
  namespace: efk
data:
  system.conf: |
    # Tail CRI-format container logs and parse them with a regexp
    # (containerd writes "<time> <stream> <logtag> <message>", not JSON).
    <source>
      @type tail
      path /var/log/containers/*.log
      tag kube.*
      read_from_head true
      <parse>
        @type regexp
        expression /^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$/
      </parse>
    </source>
    # Ship everything tagged kube.* to Elasticsearch as daily indices kube-YYYY.MM.DD.
    <match kube.**>
      @type elasticsearch
      host efk.efk.svc.cluster.local
      port 9200
      logstash_format true
      logstash_prefix kube
      logstash_dateformat %Y.%m.%d
      include_tag_key true
      tag_key @log_name
    </match>

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: efk
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: efk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: efk
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        effect: "NoSchedule"
      initContainers:
      - name: init-fluentd
        image: docker.io/fluent/fluentd-kubernetes-daemonset:v1.16.5-debian-elasticsearch7-amd64-1.0
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          cp /mnt/config-map/system.conf /mnt/conf.d
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      containers:
      - name: fluentd
        image: docker.io/fluent/fluentd-kubernetes-daemonset:v1.16.5-debian-elasticsearch7-amd64-1.0
        imagePullPolicy: IfNotPresent
        env:
          - name: K8S_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "efk.efk.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
            value: /var/log/containers/fluent*
          # A parser type written as /regex/ makes fluentd use a regexp parser
          # with that expression; this handles the CRI log line format.
          - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
            value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containerslogs
          mountPath: /var/log/containers
          readOnly: true
        - name: conf 
          mountPath: /fluentd/etc/conf.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containerslogs
        hostPath:
          path: /var/log/containers
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: fluent
    

  • Deployments vs. DaemonSets

DaemonSets are very similar to Deployments: both create Pods whose processes are not expected to terminate (for example, web servers or storage servers). Use a Deployment for stateless services such as a frontend, where scaling replicas up and down and rolling out updates smoothly matter more than controlling exactly which host a Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always runs on all (or certain) nodes, and when it needs to start before other Pods.

  • Updating a DaemonSet

    If node labels change, the DaemonSet immediately adds Pods to newly matching nodes and deletes Pods from nodes that no longer match. You can modify the Pods a DaemonSet created, but not all Pod fields can be updated, and the next time a Pod is created on some node (even a Pod with the same name), the DaemonSet controller still uses the original template. You can delete a DaemonSet; if you pass --cascade=false to kubectl (--cascade=orphan in newer versions), the Pods are left running on the nodes. You can then create a new DaemonSet with a different template: it will recognize all the existing Pods through label matching, and if any Pods need replacing, it replaces them according to its updateStrategy, as sketched below.
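
A hedged sketch of an explicit updateStrategy for the fluentd DaemonSet defined above (RollingUpdate with maxUnavailable: 1 is also the default):

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1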

  • Communicating with Pods in a DaemonSet

  1. Push: configure the Pods in the DaemonSet to push updates to another service, such as a statistics database.
  2. NodeIP and a known port: the Pods in the DaemonSet can use a hostPort, making them reachable via the node IPs. Clients can obtain the list of node IPs by some means, and the port is known in advance.
  3. DNS: create a headless Service with the same Pod selector, then discover the DaemonSet through the endpoints resource or the multiple A records returned from DNS (a sketch follows this list).
  4. Service: create a Service with the same Pod selector and use it to reach a daemon on a random node (there is no way to target a specific node).
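
A hedged sketch of option 3 for the fluentd DaemonSet above: a headless Service whose DNS name returns one A record per daemon Pod. The forward port 24224 is an assumption; the DaemonSet above does not actually open it:

apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: efk
  labels:
    app: fluentd
spec:
  clusterIP: None          # headless: DNS returns the Pod IPs directly
  selector:
    app: fluentd
  ports:
  - name: forward
    port: 24224            # assumption: fluentd forward input port
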
  • Taints and tolerations

The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:

  • node.kubernetes.io/not-ready (effect: NoExecute)
  • node.kubernetes.io/unreachable (effect: NoExecute)
  • node.kubernetes.io/disk-pressure (effect: NoSchedule)
  • node.kubernetes.io/memory-pressure (effect: NoSchedule)
  • node.kubernetes.io/pid-pressure (effect: NoSchedule)
  • node.kubernetes.io/unschedulable (effect: NoSchedule)
  • node.kubernetes.io/network-unavailable (effect: NoSchedule, host-network Pods only)

You can also define your own tolerations in the DaemonSet's Pod template, and they will be added to the DaemonSet Pods as well.
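
A hedged sketch, assuming a hypothetical taint example.com/dedicated=logging:NoSchedule that the daemon Pods should tolerate:

tolerations:
- key: "example.com/dedicated"   # hypothetical taint key
  operator: "Equal"
  value: "logging"
  effect: "NoSchedule"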

View logs in Kibana

Click Discover, add an index pattern and complete the guided steps, then click Discover again to view the log entries.

