How to set up EFK in Kubernetes to collect cluster logs

To deploy a log collection system in an offline (air-gapped) environment, I went with an Elasticsearch + Kibana + Fluentd stack.

First, like most guides online, create the namespace and the headless Service for Elasticsearch.

The YAML file is as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
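For context: because clusterIP is None this is a headless Service, so each StatefulSet pod later gets a stable DNS record of the form <pod-name>.elasticsearch.logging.svc.cluster.local, which is what discovery.seed_hosts relies on below. A quick way to check resolution once the pods exist (a throwaway busybox pod; output trimmed, addresses are placeholders):

$ kubectl run dns-test -n logging --rm -it --image=busybox --restart=Never -- nslookup elasticsearch
Name:    elasticsearch.logging.svc.cluster.local
Address: <pod IP of es-0>
Address: <pod IP of es-1>
Address: <pod IP of es-2>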

2. Deploy the ES cluster

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      nodeSelector:
        es: log
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: "IfNotPresent"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
      volumes:
      - name: data
        hostPath:
          path: /var/app/ocr
Here I use a hostPath mount because I haven't set aside a dedicated disk yet. Next, we can test the ES cluster through its REST API.
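Before hitting the API, it's worth confirming that all three pods are Running (output below is illustrative):

$ kubectl get pods -n logging -l app=elasticsearch
NAME   READY   STATUS    RESTARTS   AGE
es-0   1/1     Running   0          2m
es-1   1/1     Running   0          90s
es-2   1/1     Running   0          65s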
Use the following command to forward local port 9200 to the matching port on one of the Elasticsearch pods (e.g. es-0):
$ kubectl port-forward es-0 9200:9200 --namespace=logging
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
Then open another terminal and run: $ curl http://localhost:9200/_cluster/state?pretty
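If the three nodes have formed a single cluster, the response should list es-0, es-1 and es-2 under nodes. A quicker sanity check is the health endpoint (illustrative output, over the same port-forward):

$ curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "k8s-logs",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  ...
}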
Next, deploy Kibana. First its ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: logging
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    i18n.locale: zh-CN  # set the default UI language to Chinese
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}  # ES cluster address; since everything runs in Kubernetes in the same namespace, the service name can be used directly

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
  selector:
    app: kibana

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      nodeSelector:
        es: log
      containers:
      - name: kibana
        image: harbor.domain.com/efk/kibana:7.17.1
        imagePullPolicy: "IfNotPresent"
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200  # just point this at the headless service DNS name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
Here I still didn't mount the config file; accessing Kibana through the NodePort works fine.
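To find out which NodePort was assigned, query the Service (the ClusterIP and port below are illustrative):

$ kubectl get svc kibana -n logging
NAME     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.96.45.182   <none>        5601:31560/TCP   1m

Then open http://<node-ip>:<assigned NodePort> in a browser.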

3. Up to this point I was following other people's posts directly, but if I kept using their config beyond here, my cluster could not ship any logs to ES. After going through the official docs I found that you don't need to write the collection rules yourself; the official image already bundles the collection config.

So my YAML only swaps in a different image:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
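Once the RBAC objects are applied, a quick way to confirm the binding works is kubectl's impersonation check (expected answer shown, assuming the manifests above were applied unchanged):

$ kubectl auth can-i list pods --as=system:serviceaccount:logging:fluentd-es
yes

The DaemonSet itself follows: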

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
      # This annotation ensures fluentd is not evicted when the node evicts pods, supporting the critical pod-annotation-based priority scheme.
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: harbor.domain.com/efk/fluentd:v3.4.0
        imagePullPolicy: "IfNotPresent"
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /data/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.
      tolerations:
      - operator: Exists
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config
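After applying the DaemonSet, there should be one Fluentd pod per schedulable node (numbers below are illustrative and depend on your cluster size):

$ kubectl get ds fluentd-es -n logging
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-es   3         3         3       3            3           <none>          2m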
I also ran into a gotcha: the mountPath must match the hostPath directory, because the hostPath is where Docker actually stores the container logs on the node.
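The reason the paths must match is that the files under /var/log/containers are symlinks that ultimately resolve into the Docker data root, and Fluentd can only follow them if the same path exists inside its own container. On a node with a relocated data-root (as in my environment) it looks roughly like this (illustrative; names are placeholders):

$ readlink -f /var/log/containers/<some-pod>.log
/data/docker/containers/<container-id>/<container-id>-json.log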

4. Test the service

During testing, logs were pushed normally and I could see the indices through the ES REST API, but after about two days of normal operation logs stopped reaching ES. My initial guess was that it had something to do with the freshly built ES cluster; on top of that, installing Kibana the way those posts describe runs into security issues that also stop logs from reaching ES. As it happens our company has a shared ES, so I added its address and account information through environment variables, and logs showed up properly, in real time. So I switched to the shared ES, which also means we don't have to maintain our own.
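For reference, with the official fluentd-kubernetes-daemonset image family the ES address and credentials are usually passed as environment variables on the DaemonSet container, roughly like this (a sketch; the variable names assume that image family and the values are placeholders, check them against the docs of whatever image you actually use):

env:
- name: FLUENT_ELASTICSEARCH_HOST
  value: "es.example.internal"   # shared ES address (placeholder)
- name: FLUENT_ELASTICSEARCH_PORT
  value: "9200"
- name: FLUENT_ELASTICSEARCH_USER
  value: "fluentd"               # placeholder account
- name: FLUENT_ELASTICSEARCH_PASSWORD
  value: "change-me"             # placeholder password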
My advice: treat the official documentation as the primary reference. Every environment is different, and you can't copy someone else's setup wholesale.
Reference: https://blog.csdn.net/qq_36200932/article/details/123166613
