ELK + Filebeat Installation and Usage Guide (multiline merging, shipping to multiple Logstash instances, SSL encryption, adding fields, run logs, compression, memory limits, registering as a service, and more)
1. Environment Setup
1.1 Test machine PC1
IP: 192.168.1.99
OS: Linux CentOS 6.10
Java: 1.7
Installed here: Filebeat
(check with lsb_release -a and java -version)
1.2 Test machine PC2
IP: 192.168.1.120
OS: Windows 7
Java: 11
Installed here: Elasticsearch + Logstash + Kibana
2. Elasticsearch
2.1 Download and unpack
https://www.elastic.co/cn/elasticsearch/
2.2 Run: bin/elasticsearch.bat
3. Logstash
3.1 Download and unpack
https://www.elastic.co/cn/downloads/logstash
3.2 Create a pipeline config
/bin/conf/test.conf
3.3 Run (open a command prompt in /bin)
logstash -f conf/test.conf
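The article does not show the contents of test.conf; a minimal sketch, assuming a Beats input on port 5050 (the port used in the filebeat output config later in this article) and a local Elasticsearch index named test to match the Kibana index pattern:

```conf
input {
  beats {
    port => 5050
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "test"
  }
  # also echo events to the console for debugging
  stdout { codec => rubydebug }
}
```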
4. Kibana
4.1 Download and unpack
https://www.elastic.co/cn/downloads/kibana
4.2 Run: bin/kibana.bat
4.3 Open http://127.0.0.1:5601 in a browser
4.4 Management - Index Patterns - Create index pattern - index pattern name: test - Next step - select @timestamp from the Time Filter field name dropdown - Create index pattern
4.5 Once the index pattern is created, open Discover
5. Filebeat
5.1 Download
https://www.elastic.co/cn/downloads/beats/filebeat
5.2 Unpack: tar xvf filebeat.tar.gz -C /opt
5.3 [root@infosec ]# cd /opt/filebeat
5.4 Configure /opt/filebeat/filebeat.yml
(a complete example configuration is given below)
Set permissions (filebeat rejects a config writable by group/others):
[root@infosec filebeat]# chmod go-w /opt/filebeat/filebeat.yml
5.5 Test
Test the config: [root@infosec filebeat]# /opt/filebeat/filebeat test config
Test the output: [root@infosec filebeat]# /opt/filebeat/filebeat test output
5.6 Run: [root@infosec filebeat]# /opt/filebeat/filebeat -e -c filebeat.yml
Run in the background: nohup /opt/filebeat/filebeat -e -c filebeat.yml >/opt/filebeat/logs/filebeat.log &
Check the process: ps -e | grep filebeat
Other Filebeat Configuration
Complete filebeat.yml example:
(each option is explained in detail below)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/test/log/test.log
  fields:
    sn: 123
  multiline.pattern: '^[I,]|^[D,]|^[E,]|^[W,]|^[#]'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 50
  multiline.timeout: 10s
output.logstash:
  hosts: ["127.0.0.1:5050"]
  ssl.certificate_authorities: ["path to the certificate received from the logstash side"]
  ssl.certificate: "path to the certificate generated on this host"
  ssl.key: "path to the key generated on this host"
  compression_level: 5
  bulk_max_size: 2048
  timeout: 20
queue.mem:
  events: 2048
  flush.min_events: 1024
  flush.timeout: 5s
max_procs: 1
logging.level: info
logging.to_files: true
logging.files:
  path: /home/filebeat/logs
  name: filebeat
  keepfiles: 7
  permissions: 0644
Multiline merging
Under filebeat.inputs:
multiline.pattern: '^[I,]|^[D,]|^[E,]|^[W,]|^[#]'
multiline.negate: true
multiline.match: after
multiline.max_lines: 50
multiline.timeout: 10s
Explanation:
Lines that do not start with I, D, E, W, or # are appended to the end of the previous line; at most 50 lines are merged into one event, and a pending event is flushed after 10 seconds.
pattern: a regular expression
negate: true or false (default false). With false, lines that match pattern are merged into the previous line; with true, lines that do not match pattern are merged into the previous line.
match: after or before; merge onto the end or the beginning of the previous line
Caveat: multiline matching can consume a lot of CPU.
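The merge behaviour (negate: true, match: after) can be simulated with awk on a hypothetical log snippet: any line not matching the prefix pattern is appended to the pending event, which is emitted when the next matching line arrives.

```shell
# Hypothetical input: an error line followed by a two-line stack trace,
# then a new info line. Lines matching the prefix start a new event;
# everything else is appended to the previous one.
printf 'E,error occurred\n  at com.example.Foo\n  at com.example.Bar\nI,next event\n' |
awk '/^[IDEW,#]/ { if (buf != "") print buf; buf = $0; next }
     { buf = buf " " $0 }
     END { if (buf != "") print buf }'
# emits two merged events instead of four raw lines
```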
Adding log fields
Under filebeat.inputs:
fields:
  sn: 123
Every event shipped will carry this field.
Compression
Under output.logstash:
compression_level: 5
Compression level 5; in our tests this yielded roughly 10x compression.
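compression_level uses the standard 0-9 gzip scale, so the trade-off can be previewed with the gzip CLI; a sketch on artificially repetitive log text (the ~10x figure above depends entirely on how repetitive the real logs are):

```shell
# Generate 1000 similar log lines and compare raw vs level-5 gzip size
for i in $(seq 1 1000); do
  echo 'I,2020-06-01 12:00:00 test log line'
done > /tmp/sample.log
raw=$(wc -c < /tmp/sample.log)
gz=$(gzip -5 -c /tmp/sample.log | wc -c)
echo "raw=${raw} bytes, compressed=${gz} bytes"
```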
Filebeat's own run log
logging.level: info
logging.to_files: true
logging.files:
  path: /opt/filebeat/logs
  name: filebeat
  keepfiles: 7
  permissions: 0644
Limiting memory
queue.mem:
  events: 2048
  flush.min_events: 1024
  flush.timeout: 5s
Here queue.mem.events is set equal to output.logstash.bulk_max_size, and queue.mem.flush.min_events to half of queue.mem.events.
Shipping to multiple Logstash instances
Option 1:
output.logstash:
  hosts: ["192.168.0.11:5044","192.168.0.22:5044"]
  loadbalance: true
  # loadbalance: true enables load balancing
Result: not what we wanted; events are load-balanced across the hosts instead of sent to both.
Option 2:
Run two filebeat instances, each with its own config shipping to a different host.
filebeat.yml output config:
output.logstash:
  hosts: ["192.168.0.11:5044"]
Start: nohup /home/filebeat/filebeat -c /home/filebeat/filebeat.yml >/dev/null 2>&1 &
filebeat1.yml output config:
output.logstash:
  hosts: ["192.168.0.22:5044"]
Start (note the separate data and log paths so the two instances do not clash over the registry):
nohup /home/filebeat/filebeat -c /home/filebeat/filebeat1.yml -path.data /home/filebeat/data2/ -path.logs /home/filebeat/logs2/ >/dev/null 2>&1 &
Result: works.
Filebeat -> Logstash over SSL
On the filebeat side:
1. Edit openssl.cnf
Under [ v3_ca ] add:
subjectAltName = IP:this host's IP
2. Generate the certificate and key
mkdir -p pki/tls/certs
mkdir -p pki/tls/private
openssl req -subj '/CN=YOURIP/' -x509 -days $((100 * 365)) -batch -nodes -newkey rsa:2048 -keyout pki/tls/private/filebeat.key -out pki/tls/certs/filebeat.crt
3. Copy the generated certificate and key into the certificate/key folder on the logstash side.
4. Configure filebeat.yml
ssl.certificate_authorities: ["path to the certificate received from the logstash side"]
ssl.certificate: "path to the certificate generated on this host"
ssl.key: "path to the key generated on this host"
On the logstash side:
1. Edit openssl.cnf
Under [ v3_ca ] add:
subjectAltName = IP:this host's IP
2. Generate the certificate and key
mkdir -p pki/tls/certs
mkdir -p pki/tls/private
openssl req -subj '/CN=YOURIP/' -x509 -days $((100 * 365)) -batch -nodes -newkey rsa:2048 -keyout pki/tls/private/logstash.key -out pki/tls/certs/logstash.crt
3. Copy the generated certificate and key into the certificate/key folder on the filebeat side.
4. Edit the logstash pipeline config
ssl => true
ssl_certificate_authorities => ["path to the certificate received from the filebeat side"]
ssl_certificate => "path to the certificate generated on this host"
ssl_key => "path to the key generated on this host"
ssl_verify_mode => "force_peer"
Explanation:
ssl_verify_mode: none, no verification; peer, verify the client certificate if one is presented; force_peer, verification required, the connection is closed if the client presents no certificate.
This setup is for hosts without a domain name. If you have a domain, skip step 1 (no need to add the IP to openssl.cnf); if you only have an IP, the configuration above is required.
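To confirm the IP SAN actually made it into a certificate, inspect it with openssl x509. The sketch below uses -addext (OpenSSL 1.1.1+) as a shortcut instead of editing openssl.cnf, with a hypothetical IP and /tmp paths:

```shell
# Generate a self-signed cert carrying an IP SAN, then check the SAN is present
openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 \
  -subj '/CN=192.168.1.99' -addext 'subjectAltName = IP:192.168.1.99' \
  -keyout /tmp/fb.key -out /tmp/fb.crt 2>/dev/null
openssl x509 -in /tmp/fb.crt -noout -text | grep -A1 'Subject Alternative Name'
# should show: IP Address:192.168.1.99
```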
SSL tests
1. Client ships over SSL, server ssl_verify_mode => "none"
Result: received.
2. Client ships over SSL, server forces verification, no filebeat certificate configured
ssl_verify_mode => "force_peer"
Result: rejected.
3. Client ships over SSL, server forces verification with a certificate that is not the filebeat one
ssl_verify_mode => "force_peer"
ssl_certificate_authorities => ["D:/software/ELK/logstash-7.7.1/zy_svrcert.pem"]
Result: verification failed.
4. Client ships over SSL, server forces verification with the filebeat certificate
ssl_verify_mode => "force_peer"
ssl_certificate_authorities => ["D:/software/ELK/logstash-7.7.1/zy_cli_1_cert.pem"]
Result: received.
Network-loss test
Filebeat monitoring a test log: 10 lines, 4 MB each.
1. Network up
Result: received.
2. Network up, Logstash shut down
Result: CPU and memory usage on the filebeat side drop sharply.
3. Network down
Result: filebeat records in its run log which lines were already sent, and resumes from that line when the network comes back.
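The resume behaviour boils down to byte-offset bookkeeping (filebeat keeps this state in its data/registry directory); a minimal sketch with hypothetical /tmp paths:

```shell
# Simulate offset-based resume: record how many bytes were "shipped",
# append more data while the link is down, then continue from the offset
printf 'line1\nline2\n' > /tmp/app.log
offset=$(wc -c < /tmp/app.log)          # everything so far was sent
printf 'line3\n' >> /tmp/app.log        # written during the outage
tail -c +$((offset + 1)) /tmp/app.log   # resume: prints only line3
```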
Registering filebeat as a service
cp /root/machine/install/filebeat /etc/init.d/filebeat
chmod +x /etc/init.d/filebeat
chkconfig --add filebeat
chkconfig --list filebeat
The /etc/init.d/filebeat script:
#!/bin/sh
# ps -ef may truncate long usernames, so keep only the first 7 characters
# of the current user's name for matching against the ps output below
current_user=$(whoami)
user=${current_user:0:7}
case "$1" in
start)
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat.yml >/dev/null 2>&1 &
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat1.yml -path.data /home/filebeat/data2/ -path.logs /home/filebeat/logs2/ >/dev/null 2>&1 &
;;
start1)
PID=$(ps -ef | grep filebeat.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "start filebeat.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat.yml >/dev/null 2>&1 &
else
echo "filebeat.yml is running"
fi
;;
stop1)
PID=$(ps -ef | grep filebeat.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "filebeat.yml is not running"
else
echo "stop filebeat.yml"
kill -9 $PID
fi
;;
restart1)
PID=$(ps -ef | grep filebeat.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "start filebeat.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat.yml >/dev/null 2>&1 &
else
echo "stop filebeat.yml"
kill -9 $PID
echo "start filebeat.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat.yml >/dev/null 2>&1 &
fi
;;
check1)
PID=$(ps -ef | grep filebeat.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "filebeat.yml is not running"
else
echo "filebeat.yml is running"
fi
;;
start2)
PID=$(ps -ef | grep filebeat1.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "start filebeat1.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat1.yml -path.data /home/filebeat/data2/ -path.logs /home/filebeat/logs2/ >/dev/null 2>&1 &
else
echo "filebeat1.yml is running"
fi
;;
stop2)
PID=$(ps -ef | grep filebeat1.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "filebeat1.yml is not running"
else
echo "stop filebeat1.yml"
kill -9 $PID
fi
;;
restart2)
PID=$(ps -ef | grep filebeat1.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "start filebeat1.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat1.yml -path.data /home/filebeat/data2/ -path.logs /home/filebeat/logs2/ >/dev/null 2>&1 &
else
echo "stop filebeat1.yml"
kill -9 $PID
echo "start filebeat1.yml"
nohup /home/filebeat/./filebeat -c /home/filebeat/filebeat1.yml -path.data /home/filebeat/data2/ -path.logs /home/filebeat/logs2/ >/dev/null 2>&1 &
fi
;;
check2)
PID=$(ps -ef | grep filebeat1.yml | grep $user | grep -v "grep" | awk '{print $2}')
if [ -z "$PID" ] ;then
echo "filebeat1.yml is not running"
else
echo "filebeat1.yml is running"
fi
;;
*)
echo $"Usage: $0 {start|start1|stop1|restart1|start2|stop2|restart2|check1|check2}"
esac
Splitting logs
This can be configured in filebeat.yml; the approach below is what I used before I knew that.
Create a cron job:
*/5 * * * * /home/machine/filebeat/filebeat_task.sh
filebeat_task.sh:
#!/bin/sh
this_path="/home/filebeat/logs"
cd $this_path
echo $this_path
current_date=`date -d "-1 day" "+%Y%m%d"`
echo $current_date
split -b 100k -d -a 4 $this_path/filebeat.log $this_path/filebeat.log_${current_date}_
cat /dev/null > nohup.out
Pitfall
loadbalance: true
With load balancing, each event goes to only one of the hosts: what is sent to .189 is not sent to .240, and vice versa.
Fix: run two filebeat instances instead.
Reminder
Start-up order: on PC2, Elasticsearch, then Logstash, then Kibana; finally Filebeat on PC1.