Offline installation of the Filebeat service on Linux (ELK 7.4.0, direct connection to Elasticsearch)
Unpack the installation package
cd /home/hsyt/jenkins/filebeat
tar -xvf filebeat-7.4.0-linux-x86_64.tar.gz # unpack into the current directory; a target directory can be given with -C
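For example, to unpack into a different directory (the path below is only illustrative), create it first and pass it with -C:

mkdir -p /opt/filebeat
tar -xvf filebeat-7.4.0-linux-x86_64.tar.gz -C /opt/filebeat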
Configure properties
Complete sample (only the modified parts are shown; default content is not pasted)
#=========================== Filebeat inputs =============================
# Sample configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /home/logs/sync-*/**/*
  fields:
    indexprefix: uat-sync
  encoding: utf-8
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
  exclude_files: ['.gz$','.tar$','infra','tracelog']
- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /home/logs/**/tracelog/dubbo*.log
  fields:
    indexprefix: tracelog-sim
  encoding: utf-8
  exclude_files: ['.gz$','.tar$']
  processors:
    - decode_json_fields:
        fields: ["time", "stat.key", "count", "total.cost.milliseconds", "success"]
        process_array: true
        max_depth: 5
        target: ""
        overwrite_keys: true
        add_error_key: true
  json.keys_under_root: false
  json.add_error_key: true
  json.overwrite_keys: true

# Console output is useful for debugging, but Filebeat allows only one output,
# so keep this commented out while output.elasticsearch below is enabled.
#output.console.pretty: true

# The custom index under output.elasticsearch only takes effect when ILM is
# disabled (see the ILM notes below).
setup.ilm.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template:
  enabled: true
  name: "uat-filebeat"
  pattern: "uat-filebeat-*"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.1:9200"]
  index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

#============================== Kibana =====================================
# Kibana Host
setup.kibana:
  host: "192.168.1.1:5601"
Key fragments of filebeat.yml
index-lifecycle-management (ILM) policy. ILM is not explained in the yml file shipped with the current version; it was pieced together from Filebeat's startup logs and the official documentation.
Disabling the ILM policy: by default Filebeat generates the index according to this policy, and that index determines how data is filtered later when browsing in Kibana, so decide for yourself whether to disable it. When ILM is disabled, the Elasticsearch template settings must be configured, otherwise startup fails; only then does the custom index rule under Elasticsearch output take effect.
#setup.ilm.enabled: false
# Default ILM policy settings
#setup.ilm.enabled: auto
#setup.ilm.rollover_alias: "filebeat"
#setup.ilm.pattern: "{now/d}-000001"
#==================== Elasticsearch template setting ==========================
#setup.template:
# enabled: true
It does not matter if this template does not match the custom ES index below; it does not affect usage. In this demo these settings exist purely to keep the program from failing at startup; the actual index rule is determined below.
# name: "uat-filebeat"
# pattern: "uat-filebeat-*"
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
hosts accepts an array, e.g. ["192.168.1.1:9200","192.168.1.2:9200"]
# hosts: ["192.168.1.1:9200"]
%{[fields][indexprefix]} reads the value of the custom fields attribute defined above
# index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
The resulting index list shows up in Kibana under Index Patterns; Kibana setup and usage will be covered in a separate, more detailed document.
filebeat.inputs configures the application-service side. Filebeat currently has no dedicated module for these business services, so the log input type is used instead.
#=========================== Filebeat inputs =============================
#filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
The element below can be a list; each input is introduced as a list item with -
#- type: log
# Change to true to enable this input configuration.
To use this input it must be explicitly enabled
#enabled: false
# Paths that should be crawled and fetched. Glob based paths.
Multiple log paths can be configured, each item starting with -. ** acts as a wildcard; by default it recurses up to 9 levels down to find matching files
#paths:
#- /var/log/*.log
#- /home/logs/**/*.log
#- c:\programdata\elasticsearch\logs\*
#encoding: utf-8
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
Filter files out by file name
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
Optional custom fields; they can be referenced in the configuration as %{[fields][level]} and also appear on the data sent to ES, which makes filtering easier
#fields:
# level: debug
# review: 1
processors:
  - decode_json_fields:
      # Which JSON attributes to process (you can choose which JSON content to extract from the file data)
      fields: ["time", "stat.key", "count", "total.cost.milliseconds", "success"]
      process_array: true
      max_depth: 5
      target: ""
      overwrite_keys: true
      add_error_key: true
# Controls where this JSON data shows up in the event structure seen in Kibana
json.keys_under_root: false
json.add_error_key: true
json.overwrite_keys: true
# Pretty-print events to the console (debugging only)
output.console.pretty: true
# The data format in the log file that this configuration targets
{
  "time": "2019-09-17 17:47:24.918",
  "stat.key": {
    "method": "handle",
    "remote.app": "mhp.rpc.pat",
    "service": "com.dap.api.IService"
  },
  "count": 6,
  "total.cost.milliseconds": 34,
  "success": "Y"
}
This maps to the structure of the data as seen in Kibana.
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
Matches characters at the start of a line; regular expressions are supported, so set it according to your actual log format
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
If a single log entry can span multiple lines (e.g. an exception stack trace), this must be enabled and the matching rules configured accordingly
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
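As a concrete illustration of the three multiline settings used above: with pattern ^\[, negate: true and match: after, any line that does not start with [ is appended to the preceding line that does, so a stack trace like the following (log content is illustrative) is shipped as a single event:

[2019-10-21 14:11:29.223] ERROR com.dap.api.IService - call failed
java.lang.NullPointerException
    at com.dap.api.IService.handle(IService.java:42)
    at java.base/java.lang.Thread.run(Thread.java:834)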
Start the service
cd filebeat-7.4.0-linux-x86_64
# Start in the foreground
./filebeat -e -c filebeat.yml
# Start in the background
nohup ./filebeat -e -c filebeat.yml &
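With -e Filebeat logs to stderr, which is why the output ends up in nohup.out (see below); if preferred, redirect it to a dedicated file instead:

nohup ./filebeat -e -c filebeat.yml > filebeat.log 2>&1 &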
Check that the service is running
# Check the service process
ps -ef|grep filebeat
# View the logs
When started with nohup, the log output that previously went to the console is written to the nohup.out file in the directory from which the script was run
tail -fn200 /home/elk/filebeat/filebeat-7.4.0-linux-x86_64/nohup.out
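To confirm that data actually reached Elasticsearch, the generated indices can be listed directly; the host below matches the sample configuration above:

curl "http://192.168.1.1:9200/_cat/indices?v" | grep filebeat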
Problems encountered in use
too many open files
2019-10-21T14:11:29.223+0800 ERROR registrar/registrar.go:416 Failed to create tempfile (/home/hsyt/jenkins/filebeat/filebeat-7.4.0-linux-x86_64/data/registry/filebeat/data.json.new) for writing: open /home/hsyt/jenkins/filebeat/filebeat-7.4.0-linux-x86_64/data/registry/filebeat/data.json.new: too many open files
This error occurred because the first run bulk-imported historical data into Elasticsearch, and the number of files Filebeat had open at the same time exceeded the system limit.
- Solution 1: delete the stale, no-longer-needed log data so that Filebeat only syncs incremental log data on each run, keeping the number of open files under the limit.
- Solution 2: raise the relevant system limits, as sketched below.
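A minimal sketch of Solution 2, assuming the limit being hit is the per-user open-file limit (nofile) and that Filebeat runs as user hsyt (both are assumptions; adjust for your environment):

# Check the limit in the shell that starts Filebeat
ulimit -n
# Raise it for the current session before starting Filebeat
ulimit -n 65535
# Or raise it permanently in /etc/security/limits.conf (takes effect on re-login):
#   hsyt  soft  nofile  65535
#   hsyt  hard  nofile  65535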
The custom index rule does not take effect
According to the official documentation, the default ILM policy is only used when no custom index rule is set; in practice, however, the default ILM rule was still applied even after a custom index had been configured. The fix is to disable ILM explicitly (setup.ilm.enabled: false), as described in the ILM notes above.
The template properties must be set when using a custom index
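Putting the three pieces discussed above together, a minimal working combination looks like this (hosts and names are the sample values from this article; substitute your own):

# Disable ILM so that the custom index below is honored
setup.ilm.enabled: false
# Required once ILM is disabled, otherwise Filebeat fails at startup
setup.template:
  enabled: true
  name: "uat-filebeat"
  pattern: "uat-filebeat-*"
output.elasticsearch:
  hosts: ["192.168.1.1:9200"]
  index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"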
References
Most of the content of this article is grounded in the official development documentation, with a small amount drawn from other sources, all verified in practice. Links to the relevant official docs were given inline at the appropriate places above and are not repeated here; the official site covers all of the reference material.