Collecting Kubernetes Logs with Loki
Preface
Loki is a lightweight, easy-to-use log aggregation system. If your Kubernetes cluster is not particularly large, the Grafana + Loki stack is a good choice for collecting microservice logs.
Loki Components
The Loki architecture is simple and consists of three main components:
loki: the server, responsible for storing logs and processing queries;
promtail: the agent, responsible for collecting logs and shipping them to Loki;
grafana: responsible for visualizing the collected logs.
Environment:
k8s v1.22.10
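All of the manifests below assume a namespace named thanos-monitoring. If your cluster does not have it yet, create it first:
kubectl create namespace thanos-monitoring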
Create the Loki resources
vim loki-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: thanos-monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: thanos-monitoring
rules:
- apiGroups:
  - extensions
  resourceNames:
  - loki
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: thanos-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
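Apply the RBAC objects before moving on:
kubectl apply -f loki-rbac.yaml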
vim loki-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki-nodeport
  namespace: thanos-monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3100
    targetPort: 3100
    nodePort: 30100
  selector:
    app: loki
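The NodePort service is only needed if Loki has to be reachable from outside the cluster (for example for a Grafana instance on a standalone host, as in the last step). Once Loki is running you can check it through the NodePort, replacing <node-ip> with any node's IP:
curl http://<node-ip>:30100/ready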
vim loki.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: loki-headless
  namespace: thanos-monitoring
  labels:
    app: loki
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 3100
    protocol: TCP
    name: http-metrics
    targetPort: http-metrics
  selector:
    app: loki
---
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: thanos-monitoring
  labels:
    app: loki
spec:
  type: ClusterIP
  ports:
  - port: 3100
    protocol: TCP
    name: http-metrics
    targetPort: http-metrics
  selector:
    app: loki
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: thanos-monitoring
  labels:
    app: loki
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
  serviceName: loki-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
    spec:
      serviceAccountName: loki
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      initContainers: []
      containers:
      - name: loki
        image: grafana/loki:2.0.0
        imagePullPolicy: IfNotPresent
        args:
        - -config.file=/etc/loki/loki.yaml
        volumeMounts:
        - name: config
          mountPath: /etc/loki
        - name: storage
          mountPath: /data
        ports:
        - name: http-metrics
          containerPort: 3100
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        securityContext:
          readOnlyRootFilesystem: true
      terminationGracePeriodSeconds: 4800
      volumes:
      - name: config
        configMap:
          defaultMode: 420
          name: loki
  volumeClaimTemplates:
  - metadata:
      name: storage
      labels:
        app: loki
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: "2Gi"
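The volumeClaimTemplates block assumes a StorageClass named managed-nfs-storage already exists in the cluster; replace storageClassName with whatever your environment provides. You can list the available classes with:
kubectl get storageclass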
vim loki-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: thanos-monitoring
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m       # how long a chunk may sit in memory without updates before it is flushed, if it has not reached the maximum chunk size
      chunk_block_size: 262144
      chunk_retain_period: 1m     # how long a chunk is kept in memory after it has been flushed
      max_transfer_retries: 0     # number of times to try and transfer chunks when leaving before falling back to flushing to the store. Zero = no transfers are done.
      lifecycler:                 # configures the ingester lifecycle and where it registers for discovery
        ring:
          kvstore:
            store: inmemory       # backend store for the ring; supports consul, etcd, inmemory
          replication_factor: 1   # number of ingesters to write to and read from; at least 1 (defaults to 3 for redundancy and resilience)
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true           # whether old samples are rejected
      reject_old_samples_max_age: 168h   # maximum age beyond which old samples are rejected
    schema_config:                # configures which index schemas are used from which point in time
      configs:
      - from: 2023-04-12          # date the index schema takes effect; if this is the only schema_config, use a date in the past, otherwise the date you want to switch schemas
        store: boltdb-shipper     # which store the index uses, e.g. cassandra, bigtable, dynamodb, or boltdb
        object_store: filesystem  # store used for chunks, e.g. gcs, s3, inmemory, filesystem, cassandra; defaults to the same value as store if omitted
        schema: v11
        index:                    # configures how the index is updated and stored
          prefix: index_          # prefix for all period tables
          period: 24h             # table period
    server:
      http_listen_port: 3100
    storage_config:               # configures one or more stores for the index and chunks
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    chunk_store_config:           # configures how chunks are cached and how long to wait before saving them to the store
      max_look_back_period: 0s    # limits how far back queries may look; disabled by default, and should be less than or equal to table_manager.retention_period
    table_manager:
      retention_deletes_enabled: false   # retention-delete switch, used for table retention deletes
      retention_period: 0s               # log retention period; must be a multiple of the index/chunk table period
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: filesystem
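With the ConfigMap in place, apply the remaining Loki manifests and wait for the pod to become Ready (the StatefulSet pod cannot start until the loki ConfigMap exists):
kubectl apply -f loki-configmap.yaml -f loki.yaml -f loki-nodeport.yaml
kubectl -n thanos-monitoring get pods -l app=loki -w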
Create the Promtail resources
vim loki-promtail-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-promtail
  labels:
    app: promtail
  namespace: thanos-monitoring
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: promtail
  name: promtail-clusterrole
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail-clusterrolebinding
  labels:
    app: promtail
subjects:
- kind: ServiceAccount
  name: loki-promtail
  namespace: thanos-monitoring
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
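Apply the Promtail RBAC objects:
kubectl apply -f loki-promtail-rbac.yaml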
vim loki-promtail.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki-promtail-headless
  namespace: thanos-monitoring
  labels:
    app: promtail
spec:
  clusterIP: None
  ports:
  - port: 3101
    protocol: TCP
    name: http-metrics
    targetPort: http-metrics
  selector:
    app: promtail
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: thanos-monitoring
  labels:
    app: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: loki-promtail
      containers:
      - name: promtail
        image: grafana/promtail:2.0.0
        imagePullPolicy: IfNotPresent
        args:
        - -config.file=/etc/promtail/promtail.yaml
        - -client.url=http://loki:3100/loki/api/v1/push
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/promtail
          name: config
        - mountPath: /run/promtail
          name: run
        - mountPath: /var/lib/docker/containers
          name: docker
          readOnly: true
        - mountPath: /var/log/pods
          name: pods
          readOnly: true
        ports:
        - containerPort: 3101
          name: http-metrics
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsGroup: 0
          runAsUser: 0
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: config
        configMap:
          defaultMode: 420
          name: loki-promtail
      - name: run
        hostPath:
          path: /run/promtail
          type: ""
      - name: docker
        hostPath:
          path: /var/lib/docker/containers
      - name: pods
        hostPath:
          path: /var/log/pods
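The DaemonSet tolerates the master taint so logs from control-plane nodes are collected too, and it mounts /var/lib/docker/containers and /var/log/pods from each host. Its pods reference a ConfigMap named loki-promtail (created in the next step) and will stay in ContainerCreating until that ConfigMap exists, so apply both files together:
kubectl apply -f loki-promtail.yaml -f loki-promtail-configmap.yaml
kubectl -n thanos-monitoring get pods -l app=promtail -o wide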
vim loki-promtail-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: thanos-monitoring
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:               # configures how Promtail connects to the Loki instance
      backoff_config:     # configures how failed requests to Loki are retried
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576  # maximum batch size (in bytes) to send to Loki
      batchwait: 1s       # maximum time to wait before sending a batch, even if the batch size has not been reached
      external_labels: {} # static labels added to all logs sent to Loki
      timeout: 10s        # maximum time to wait for the server to respond to a request
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
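Once the promtail pods are running on every node, you can confirm that logs are reaching Loki by querying its label API through the NodePort (replace <node-ip> with a node IP):
curl http://<node-ip>:30100/loki/api/v1/labels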
Deploy Grafana
mkdir -p /data/prometheus/grafana && chmod 755 /data/prometheus/grafana
docker run -d -p 3000:3000 -v /data/prometheus/grafana:/var/lib/grafana --name=grafana grafana/grafana:latest
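Check that the container started cleanly before configuring it:
docker ps --filter name=grafana
docker logs grafana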
The Grafana configuration file inside the container is /etc/grafana/grafana.ini.
Open http://<server-ip>:3000 in a browser; the initial username/password is admin/admin, and you will be asked to change the password on first login.
Add the Loki data source:
Grafana dashboard templates are available at https://grafana.com/grafana/dashboards; here I use dashboard ID 13639.
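When adding the data source, point Grafana at the NodePort exposed earlier, e.g. http://<node-ip>:30100 (or http://loki.thanos-monitoring:3100 if Grafana runs inside the cluster). A quick end-to-end check is to run a simple LogQL query in Explore, for example:
{namespace="kube-system"}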