Deploying ELK + Filebeat + Logstash + Kafka on Kubernetes (Part 3): Filebeat Deployment
This part deploys Filebeat in the k8s ELK + Filebeat + Kafka cluster to collect container logs and ship them to Kafka.
RBAC
# cat filebeat-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - events
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - ""
    resourceNames:
      - filebeat-prospectors
    resources:
      - configmaps
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  labels:
    app: filebeat-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: filebeat
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: filebeat
    namespace: elk
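With the ServiceAccount, ClusterRole and ClusterRoleBinding in place, apply the manifest and confirm the objects exist (a quick sanity check, assuming the file is saved as filebeat-rbac.yaml as shown above):

kubectl apply -f filebeat-rbac.yaml
kubectl get serviceaccount filebeat -n elk
kubectl get clusterrole,clusterrolebinding filebeat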
Filebeat configuration file
# cat filebeat-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: elk
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
            tags: ["k8s"]
            fields:
              log_topic: k8s
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
    setup.template.settings:
      index.number_of_shards: 3
      index.number_of_replicas: 1
      index.codec: best_compression
      _source.enabled: false
    output.kafka:
      hosts: ["kafka-kraft-statefulset-0.kafka-kraft-svc:9091","kafka-kraft-statefulset-1.kafka-kraft-svc:9091","kafka-kraft-statefulset-2.kafka-kraft-svc:9091"]
      topic: '%{[fields.log_topic]}'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
    processors:
      - add_host_metadata:
          when.not.contains.tags: forwarded
      - add_cloud_metadata: ~
      - add_docker_metadata: ~
      - add_kubernetes_metadata: ~
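Because hints.enabled is true, individual workloads can override the default container input through co.elastic.logs/* annotations on their Pods. A minimal sketch of what that looks like (this demo Pod and its multiline pattern are only an illustration, not part of the stack being deployed here):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app                     # hypothetical Pod, for illustration only
  namespace: default
  annotations:
    co.elastic.logs/enabled: "true"
    # Join stack traces: lines not starting with '[' belong to the previous event
    co.elastic.logs/multiline.pattern: '^\['
    co.elastic.logs/multiline.negate: "true"
    co.elastic.logs/multiline.match: "after"
spec:
  containers:
    - name: demo-app
      image: nginx:1.23              # placeholder image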
Alternatively, you can collect the logs by mounting the container log directories directly, as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: elk
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/pods/*/*/*.log
        fields:
          node: ${HOSTNAME}
        fields_under_root: true
        # This setting is critical: Filebeat ignores symlinks by default,
        # and the kubelet's log files are symlinks, so nothing is collected without it
        symlinks: true
        tags: ["k8s"]
DaemonSet
Use a DaemonSet so that Filebeat runs on every node and collects that node's logs.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: 3.127.33.174:8443/elk/filebeat:8.1.0
          imagePullPolicy: IfNotPresent
          command: [
            "filebeat",
            "-e",
            "-c", "/etc/filebeat.yml"
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 1000Mi
              cpu: 1000m
            requests:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: dockerlog
              mountPath: /home/docker/docker/containers
            - name: varlog
              mountPath: /var/log/
              readOnly: true
            - name: timezone
              mountPath: /etc/localtime
      volumes:
        - name: config
          configMap:
            defaultMode: 0644
            name: filebeat-configmap
        - name: dockerlog
          hostPath:
            path: /home/docker/docker/containers/
        - name: varlog
          hostPath:
            path: /var/log/
        - name: data
          hostPath:
            path: /home/k8s/data
            type: DirectoryOrCreate
        - name: timezone
          hostPath:
            path: /etc/localtime
      tolerations:
        - effect: NoExecute
          key: dedicated
          operator: Equal
          value: gpu
        - effect: NoSchedule
          operator: Exists
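Apply the ConfigMap and the DaemonSet, then wait for the rollout to finish on every node (assuming the manifests are saved as filebeat-cm.yaml and filebeat-daemonset.yaml):

kubectl apply -f filebeat-cm.yaml
kubectl apply -f filebeat-daemonset.yaml
kubectl rollout status daemonset/filebeat -n elk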
Check that the Filebeat pods are running: kubectl get pods -n elk -o wide | grep filebeat
Now we can go into Kafka to check consumption and see the container logs collected by Filebeat; the remaining filtering and parsing will be done in Logstash.
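For example, you can read a few messages straight from one of the brokers with the console consumer; a sketch assuming the standard Kafka scripts live under /opt/kafka/bin inside the broker image (adjust the bin path to wherever your image installs Kafka):

kubectl exec -it kafka-kraft-statefulset-0 -n elk -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka-kraft-statefulset-0.kafka-kraft-svc:9091 \
  --topic k8s \
  --from-beginning --max-messages 10

The topic name k8s comes from fields.log_topic in the Filebeat ConfigMap; each message is a JSON document that Logstash will consume and filter in the next part.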