【k8s】Log handling (active push and asynchronous collection)
Application log handling options in Kubernetes
1. Active push: Graylog / SkyWalking
2. Asynchronous collection: ELK / EFK / Logging Operator (the Rancher log-collection approach; also a useful reference for unified log formatting)
1. Active push
Spring Boot integration with Graylog
<dependency>
    <groupId>de.siegmar</groupId>
    <artifactId>logback-gelf</artifactId>
    <version>3.0.0</version>
</dependency>
<appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
    <graylogPort>xxxx</graylogPort> <!-- Graylog log ingestion port -->
    <graylogHost>graylog.default</graylogHost> <!-- Graylog server address -->
    <maxChunkSize>508</maxChunkSize>
    <useCompression>true</useCompression>
    <encoder class="de.siegmar.logbackgelf.GelfEncoder">
        <includeRawMessage>false</includeRawMessage>
        <includeMarker>true</includeMarker>
        <includeMdcData>true</includeMdcData>
        <includeCallerData>false</includeCallerData>
        <includeRootCauseData>false</includeRootCauseData>
        <includeLevelName>true</includeLevelName>
        <shortPatternLayout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern>
        </shortPatternLayout>
        <fullPatternLayout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern>
        </fullPatternLayout>
        <staticField>app_env:${springAppEnv}</staticField>
        <staticField>app_name:${springAppName}</staticField>
        <staticField>os_arch:${os.arch}</staticField>
        <staticField>os_name:${os.name}</staticField>
        <staticField>os_version:${os.version}</staticField>
    </encoder>
</appender>
Spring Boot integration with SkyWalking logging
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-logback-1.x</artifactId>
    <version>9.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-trace</artifactId>
    <version>9.1.0</version>
</dependency>
<appender name="SKY-GRPC" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern>
        </layout>
    </encoder>
</appender>
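The GRPCLogClientAppender only hands log records to the in-process SkyWalking Java agent; the gRPC receiver address is configured on the agent itself (in agent.config or via environment variables), not in logback-spring.xml. A sketch of the relevant agent settings, with placeholder values:

```properties
# agent.config of the SkyWalking Java agent (host/port values are placeholders).
# Logs reported by GRPCLogClientAppender are sent to this gRPC endpoint.
plugin.toolkit.log.grpc.reporter.server_host=${SW_GRPC_LOG_SERVER_HOST:skywalking-oap.default}
plugin.toolkit.log.grpc.reporter.server_port=${SW_GRPC_LOG_SERVER_PORT:11800}
plugin.toolkit.log.grpc.reporter.max_message_size=${SW_GRPC_LOG_MAX_MESSAGE_SIZE:10485760}
```

If these are left unset, the agent defaults to reporting logs to 127.0.0.1:11800.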
Complete logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <!-- Define the log file location; do not use relative paths in Logback configuration -->
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <springProperty scope="context" name="springAppEnv" source="spring.profiles.active"/>
    <property name="log.path" value="logs/${springAppName}"/>
    <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread][%-5level][logger_name:%logger{36}][%tid][mdc_tid:%X{tid}]-message:%msg%n"/>
    <!-- Console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${LOG_PATTERN}</pattern>
            </layout>
        </encoder>
    </appender>
    <!-- Roll log files daily -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Name pattern of the rolled log files -->
            <FileNamePattern>${log.path}/info.%d{yyyy-MM-dd}.%i.log</FileNamePattern>
            <!-- Days of log history to keep -->
            <MaxHistory>30</MaxHistory>
            <!-- Cap the size of each log file -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${LOG_PATTERN}</pattern>
            </layout>
        </encoder>
    </appender>
    <appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
        <graylogPort>xxxxx</graylogPort> <!-- Graylog log ingestion port -->
        <graylogHost>xxx.xxx.xxx.xxx</graylogHost> <!-- Graylog log ingestion IP -->
        <maxChunkSize>508</maxChunkSize>
        <useCompression>true</useCompression>
        <encoder class="de.siegmar.logbackgelf.GelfEncoder">
            <includeRawMessage>false</includeRawMessage>
            <includeMarker>true</includeMarker>
            <includeMdcData>true</includeMdcData>
            <includeCallerData>false</includeCallerData>
            <includeRootCauseData>false</includeRootCauseData>
            <includeLevelName>true</includeLevelName>
            <shortPatternLayout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${LOG_PATTERN}</pattern>
            </shortPatternLayout>
            <fullPatternLayout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${LOG_PATTERN}</pattern>
            </fullPatternLayout>
            <staticField>app_env:${springAppEnv}</staticField>
            <staticField>app_name:${springAppName}</staticField>
            <staticField>os_arch:${os.arch}</staticField>
            <staticField>os_name:${os.name}</staticField>
            <staticField>os_version:${os.version}</staticField>
        </encoder>
    </appender>
    <appender name="SKY-GRPC" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${LOG_PATTERN}</pattern>
            </layout>
        </encoder>
    </appender>
    <!-- Set the log output level per environment -->
    <springProfile name="test">
        <root level="INFO">
            <appender-ref ref="GELF"/>
            <appender-ref ref="FILE"/>
            <appender-ref ref="STDOUT"/>
            <appender-ref ref="SKY-GRPC"/>
        </root>
    </springProfile>
    <springProfile name="dev">
        <root level="INFO">
            <appender-ref ref="GELF"/>
            <appender-ref ref="FILE"/>
            <appender-ref ref="STDOUT"/>
            <appender-ref ref="SKY-GRPC"/>
        </root>
    </springProfile>
    <springProfile name="pro">
        <root level="INFO">
            <appender-ref ref="GELF"/>
            <appender-ref ref="FILE"/>
            <appender-ref ref="STDOUT"/>
            <appender-ref ref="SKY-GRPC"/>
        </root>
    </springProfile>
</configuration>
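For completeness, the pieces the configuration above depends on at runtime (the active profile that selects a springProfile block, and the SkyWalking agent's backend address) are typically injected through the pod spec. A minimal sketch, with placeholder names and addresses:

```yaml
# Fragment of a Deployment pod spec (image, names and addresses are placeholders)
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0.0
    env:
      - name: SPRING_PROFILES_ACTIVE            # selects the matching <springProfile> block
        value: "test"
      - name: SW_AGENT_NAME                     # service name reported by the SkyWalking agent
        value: "my-app"
      - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES  # SkyWalking OAP address
        value: "skywalking-oap.default:11800"
```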
2. Log collection in Rancher
This follows the blog post "RKE2部署Kubernetes(四) rancher2.7.9日志管理Logging Operator" (RKE2 Kubernetes deployment, part 4: log management with the Logging Operator in Rancher 2.7.9).
Hands-on configuration, with output to Elasticsearch.
Follow the post until the Logging tab appears, then:
1) Configure Outputs/ClusterOutputs: fill in a name, the Elasticsearch IP and port, and the index name for the logs (defaults to fluentd if left empty).
2) Configure Flows/ClusterFlows: set a name, use label selectors to pick the application pods whose logs should be collected, and associate the Flow with the Output/ClusterOutput created in step 1.
3) View the collected logs in Kibana.
4) Note: at this point logs are collected line by line, one record per line, so with the log level at DEBUG or below the volume becomes very large.
5) Add multiline merging. Even then, the volume at DEBUG or below is still considerable, so logs need to be cleaned up periodically.
Open Flows/ClusterFlows, select the Filters tab, and enter the following; the value of multiline_start_regexp is a regular expression matching the first line of a log record.
Sample log line:
2024-01-09 10:17:26.513 [http-nio-9201-exec-1][INFO ][logger_name:c.w.s.c.DataDisplayController][TID:bed8b9f902af42128bd898abe4265856.57.17047666463410039][mdc_tid:]-message:xxxxxxxx方法时长为139:
- concat:
    multiline_start_regexp: ^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(,|\.)\d{3}).*
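As a quick sanity check, the start-of-record expression can be exercised with Java's regex engine (a sketch; the fractional-seconds separator is escaped here so the alternation matches only a literal ',' or '.'):

```java
import java.util.regex.Pattern;

public class MultilineStartCheck {
    // Mirrors multiline_start_regexp from the concat filter, with the
    // separator dot escaped so it cannot match arbitrary characters.
    static final Pattern START = Pattern.compile(
            "^(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}(,|\\.)\\d{3}).*");

    static boolean startsNewRecord(String line) {
        return START.matcher(line).matches();
    }

    public static void main(String[] args) {
        // Timestamped line: begins a new log record.
        System.out.println(startsNewRecord(
                "2024-01-09 10:17:26.513 [http-nio-9201-exec-1][INFO ] message"));
        // Stack-trace continuation line: gets concatenated onto the previous record.
        System.out.println(startsNewRecord(
                "\tat com.example.Demo.run(Demo.java:42)"));
    }
}
```

Running it prints true for the timestamped line and false for the continuation line, which is exactly the split the concat filter uses when merging.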
Multiline log entries in a k8s pod (stack traces, for example) need to be merged so that the multiple lines become a single record.
Before the merge is configured, Elasticsearch shows one record per line; after the merge, a multiline log entry arrives in Elasticsearch as one record.
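The UI steps above create Logging Operator custom resources under the hood; roughly equivalent manifests look like the following sketch (names, namespace, host and index are placeholders):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output            # step 1: the Output name
  namespace: default
spec:
  elasticsearch:
    host: xxx.xxx.xxx.xxx    # Elasticsearch IP
    port: 9200               # Elasticsearch port
    index_name: my-app-logs  # index name (defaults to fluentd if omitted)
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-flow          # step 2: the Flow name
  namespace: default
spec:
  match:
    - select:
        labels:
          app: my-app        # label selector picking the application pods
  filters:
    - concat:                # the multiline merge filter from the Filters tab
        multiline_start_regexp: '^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(,|\.)\d{3}).*'
  localOutputRefs:
    - es-output              # associates the Flow with the Output
```

Looking at the CRDs directly is also a convenient way to diff what the UI actually applied.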
Possible problems
1. The multiline matching rule was changed, but the collected logs are still not merged.
Fix: 1) check whether the change made it into fluentd.conf; 2) redeploy the logging operator add-on.
Checking whether the fluentd.conf change took effect:
1) Locate the fluentd workload.
2) Open its configuration (shown as a screenshot in the original post).
3) Check whether the multiline merge setting in fluentd.conf matches the Flow configuration.
If fluentd.conf and the Flow do not match: edit the logging operator add-on, click through the wizard without changing any settings, and update at the end. Rancher will then redeploy the logging operator; after that, verify fluentd.conf again.
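The rendered fluentd.conf can also be inspected from the command line. A sketch, assuming Rancher installed the Logging Operator into the cattle-logging-system namespace and stores the rendered config in a Secret whose name contains "fluentd" (look up the exact name in your cluster with the first command):

```shell
# Find the Secret holding the rendered fluentd configuration
kubectl -n cattle-logging-system get secrets | grep fluentd

# Decode it and check whether the multiline rule is present
# (replace the Secret name with the one found above)
kubectl -n cattle-logging-system get secret rancher-logging-fluentd-app \
  -o jsonpath='{.data.fluentd\.conf}' | base64 -d | grep -A3 multiline_start_regexp
```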