Requirement:

        Collect the container logs of a K8S cluster and store them centrally.

Solutions:

        1. DaemonSet

                Run Filebeat as a daemon on every node; Filebeat ships the collected logs through Logstash to a Java program, which processes them and stores them centrally.

        2. Sidecar

                Add an extra Filebeat container to each Pod; it reads the relevant log files through a shared volume and sends them through Logstash to the same Java program.

The two approaches can coexist without conflict: the DaemonSet collects the containers' standard output, and a Sidecar can be added wherever custom log collection is needed (a minimal sketch follows).
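For reference, a Sidecar layout might look roughly like this (a sketch only; the application name, image, and log path are assumptions, and this approach is not covered further in this article):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-filebeat         # hypothetical application Pod
spec:
  containers:
  - name: app
    image: my-app:latest          # assumption: the app writes log files under /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: filebeat                # the extra log-shipping container
    image: elastic/filebeat:7.10.1
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs                # shared between the two containers
    emptyDir: {}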

The following describes collecting container logs with the DaemonSet approach:

First, here is the YAML file for the K8S deployment:

# Create the service account
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: itsm-node-manager
  namespace: kube-system
---
# Create the cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: itsm-node-manager-role
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs:
  - get
  - list
  - watch
---
# Bind the service account to the role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: itsm-node-manager-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: itsm-node-manager-role
subjects:
- kind: ServiceAccount
  name: itsm-node-manager
  namespace: kube-system
---
# Create the Logstash configuration files
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: logstash-config
  namespace: kube-system
data:
  logstash.yml: 'config.reload.automatic: true'
  pipeline.conf: |-
    input {
        beats {
            port => 5044
            codec => json
        }
    }
    filter {
    }
    output {
        http {
          http_method => "post"
          format => "json"
          # URL of the receiving program (the Java code is shown below). If the
          # target runs inside the cluster, a Service DNS name can be used here,
          # just like the Filebeat output below.
          url => "http://192.168.0.195:8080/containerLog/insert"
          content_type => "application/json"
       }
    }
---
# Create the Logstash Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kube-system
  labels:
    server: logstash-7.10.1
spec:
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
      name: logstash
    spec:
      containers:
      - image: elastic/logstash:7.10.1
        imagePullPolicy: IfNotPresent
        name: logstash
        securityContext:
          procMount: Default
          runAsUser: 0
        volumeMounts:
        - mountPath: /usr/share/logstash/config/logstash.yml
          name: logstash-config
          readOnly: true
          subPath: logstash.yml
        - mountPath: /usr/share/logstash/pipeline/logstash.conf
          name: logstash-config
          readOnly: true
          subPath: pipeline.conf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 120
      imagePullSecrets:
      - name: dockerpull
      volumes:
      - configMap:
          defaultMode: 420
          name: logstash-config
        name: logstash-config
---
# Create the Logstash Service
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: logstash
  name: logstash
  namespace: kube-system
spec:
  type: ClusterIP
  selector:
    k8s-app: logstash
  ports:
  - port: 5044
    protocol: TCP
    targetPort: 5044
---
# Create the Filebeat configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |-
     filebeat.autodiscover:
       providers:
         - type: kubernetes
           host: ${NODE_NAME}
           hints.enabled: true
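           # With hints enabled, individual Pods can override the
           # default_config below via annotations such as
           # co.elastic.logs/enabled or co.elastic.logs/multiline.pattern.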
           hints.default_config:
             type: container
             paths:
               - /var/log/containers/*${data.kubernetes.container.id}.log
     processors:
     - add_cloud_metadata:
     - add_host_metadata:
     output.logstash:
       hosts: ["logstash.kube-system.svc.cluster.local:5044"]     # kubectl -n logs get svc 
       enabled: true
---
# Create the Filebeat DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    server: filebeat-7.10.1
spec:
  selector:
    matchLabels:
      name: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        name: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: elastic/filebeat:7.10.1
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          procMount: Default
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      restartPolicy: Always
      serviceAccount: itsm-node-manager
      serviceAccountName: itsm-node-manager
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /opt/filebeat/data
          type: DirectoryOrCreate
        name: data

All of the above resources are kept in a single YAML file, separated by "---".
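Assuming it is saved as filebeat-daemonset.yaml (file name assumed), the whole stack can be applied and checked like this:

kubectl apply -f filebeat-daemonset.yaml
# one Filebeat Pod should appear per node
kubectl -n kube-system get pods -l name=filebeat
kubectl -n kube-system get pods -l k8s-app=logstash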

Here is the Java code snippet:

import java.io.BufferedReader;
import java.io.IOException;

import javax.servlet.http.HttpServletRequest;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.extern.slf4j.Slf4j;

@Api(tags = "Container log controller")
@Slf4j
@RestController
@RequestMapping("/containerLog")
public class ContainerLogController {

    @Autowired
    private ContainerLogService containerLogService;

    @ApiOperation(value = "Container log ingestion endpoint", produces = "application/json", response = String.class)
    @PostMapping("insert")
    public Result insert(HttpServletRequest httpServletRequest) {
        StringBuilder sb = new StringBuilder();
        // Read the raw request body posted by the Logstash http output.
        try (BufferedReader br = httpServletRequest.getReader()) {
            String str;
            while ((str = br.readLine()) != null) {
                sb.append(str);
            }
            containerLogService.insert(sb.toString());
        } catch (IOException e) {
            log.error("Failed to read container log payload", e);
        }
        return Result.newSuccess();
    }
}
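The ContainerLogService itself is project-specific and not part of the original snippet. A minimal sketch of what its insert method could do, assuming Jackson is on the classpath and storage is handled by a hypothetical repository:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.springframework.stereotype.Service;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Service
public class ContainerLogService {

    private final ObjectMapper mapper = new ObjectMapper();

    public void insert(String body) {
        try {
            // Each request body is one JSON event produced by Logstash.
            JsonNode event = mapper.readTree(body);
            // Filebeat's Kubernetes metadata identifies the source container.
            String namespace = event.path("kubernetes").path("namespace").asText();
            String pod = event.path("kubernetes").path("pod").path("name").asText();
            String message = event.path("message").asText();
            // Store the record wherever the project requires, e.g.:
            // containerLogRepository.save(new ContainerLog(namespace, pod, message)); // hypothetical
            log.debug("log from {}/{}: {}", namespace, pod, message);
        } catch (Exception e) {
            log.error("Failed to parse container log event", e);
        }
    }
}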

At this point, the Java program receives the log entries sent by Logstash; every container log record arrives as JSON.

There are three places that can be extended as needed:

1. Filebeat collection rules

2. Logstash filter rules (see the filter sketch after this list)

3. Processing logic in the Java program
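For example, a Logstash filter could parse application JSON out of the message field and drop noisy Beats metadata before the event is posted to the Java program (an illustrative sketch, not part of the original pipeline):

filter {
    # Parse the message field as JSON when the application logs structured JSON.
    json {
        source => "message"
        target => "app"
        skip_on_invalid_json => true
    }
    # Remove Beats bookkeeping fields the Java program does not need.
    mutate {
        remove_field => ["agent", "ecs", "input"]
    }
}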
