1. Configuration file location

For rpm and deb installations, you will find the configuration file at /etc/filebeat/filebeat.yml. Under Docker, it is located at /usr/share/filebeat/filebeat.yml. For the mac, win, and zip packages, look inside the archive you just extracted. The same path also contains a full example configuration file named filebeat.reference.yml, which shows all non-deprecated configuration options.

  1. Log reading configuration:
  • Project log files

    Use Filebeat to read the files, with the location configured under paths. With /data/share/business_log/TA-*/debug.log, Filebeat will automatically read the debug.log files under the directories starting with TA- inside business_log. You can use Linux wildcards (globs) in the path to match the file names you need.

    #=========================== Filebeat prospectors =============================
     
    filebeat.prospectors:
     
    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.
    
    # Set the input type to log
    - type: log
     
      # Change to true to enable this prospector configuration.
      enabled: true
     
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        #- /usr/local/server/openresty/nginx/logs/*.log
        - /data/share/business_log/TA-*/debug.log
        #- c:\programdata\elasticsearch\logs\*
    

    Filebeat's multiline handling, for cases where a single log entry spans multiple lines:

    # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
    multiline:
        pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
        negate: true
        match: after
    

    The configuration above means: any line that does not start with a time-of-day pattern is appended to the end of the previous line (the regex is not great, but it will do).
    pattern: the regular expression to match
    negate: true or false; the default is false, meaning lines that match pattern are merged into the previous line; with true, lines that do NOT match pattern are merged into the previous line
    match: after or before, i.e. merge to the end or to the beginning of the previous line
    There are two more options, also commented out by default; you can leave them alone unless you have special requirements (see the combined sketch below):
    max_lines: 500
    timeout: 5s
    max_lines: the maximum number of lines merged into one event, default 500
    timeout: how long to wait before flushing a multiline event, default 5s, so that merging does not take too long or even hang
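
    Putting these options together, a minimal sketch of a multiline input (same pattern and path as above; max_lines and timeout are uncommented here purely for illustration):

    - type: log
      enabled: true
      paths:
        - /data/share/business_log/TA-*/debug.log
      # Lines that do NOT start with hh:mm:ss are appended to the previous line
      multiline:
        pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
        negate: true
        match: after
        # Cap how much gets merged into a single event
        max_lines: 500
        timeout: 5s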

  • Nginx log files

    #=========================== Filebeat prospectors =============================
     
    filebeat.prospectors:
     
    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.
     
    - type: log
     
      # Change to true to enable this prospector configuration.
      enabled: true
     
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /usr/local/server/openresty/nginx/logs/access.log
        - /usr/local/server/openresty/nginx/logs/error.log
        #- /data/share/business_log/TA-*/debug.log
        #- c:\programdata\elasticsearch\logs\*
    
  • Output configuration

    We need to output to Logstash, so comment out the settings under the Elasticsearch output and fill in the Logstash output section. Filebeat will then send the log entries it reads to the Logstash servers configured under hosts.

    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      # Logstash does not form a cluster; Filebeat itself will cycle through the Logstash hosts listed here and send to one that is available
      hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
      index: "logstash-%{+yyyy.MM.dd}"
     
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
     
      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"
     
      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"
    
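    By default Filebeat sends to one of the listed hosts and only switches to another if it becomes unreachable; the logstash output also has a loadbalance option to spread events across all hosts. A minimal sketch (this option is not part of the configuration above):

    output.logstash:
      hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
      index: "logstash-%{+yyyy.MM.dd}"
      # Send events to all configured hosts in a load-balanced fashion
      loadbalance: true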

Filebeat startup command: nohup ./filebeat -e -c filebeat-TA.yml >/dev/null 2>&1 &
You can run multiple Filebeat instances by starting each one with a different *-Filebeat.yml configuration file.
