I. The docker-compose.yml configuration file

vim docker-compose.yml
version: '2'
services:
  mongodb:
    container_name: mongo
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  elasticsearch:
    container_name: es
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.5
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - TZ=Asia/Shanghai
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ports:
      - 9200:9200
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g
  graylog:
    container_name: graylog
    image: graylog/graylog:3.3
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://10.2.33.101:9000/
      - TZ=Asia/Shanghai
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      - 9000:9000
      - 1514:1514
      - 1514:1514/udp
      - 12201:12201
      - 12201-12205:12201-12205/udp
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
docker-compose up -d
Open http://10.2.33.101:9000/ in a browser and log in with admin / admin.
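To confirm the stack came up cleanly, a quick sanity check from the Docker host (a sketch assuming the default port mappings above):

docker-compose ps
curl -s http://localhost:9200    # Elasticsearch should report version number 6.8.5
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9000    # Graylog web UI should return 200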

 

II. Usage examples

1. A simple example: collecting txt logs (web log type) over TCP

Node: pick one of the ports exposed when the Docker stack was started above, e.g. the 12201:12201 mapping.
Create the input: System / Inputs -> (Select input -> Raw/Plaintext TCP) -> Launch new input -> set the title and port (everything else can stay at the defaults) -> Save. Then attach a grok extractor to the input to parse the Apache common log format; it can be imported as JSON from the input's Manage extractors page:

{
  "extractors": [
    {
      "title": "commonapache",
      "extractor_type": "grok",
      "converters": [],
      "order": 0,
      "cursor_strategy": "copy",
      "source_field": "message",
      "target_field": "",
      "extractor_config": {
        "grok_pattern": "%{COMMONAPACHELOG}"
      },
      "condition_type": "none",
      "condition_value": ""
    }
  ],
  "version": "3.3.16"
}

 

2. Pipe a log file into the input on the server and the messages appear in Graylog:
cat messages | nc localhost 12201
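As a quick test, a single Apache common-log-format line can be sent the same way (a hypothetical sample line; the %{COMMONAPACHELOG} extractor above should break it into fields such as clientip, verb, request, response and bytes):

echo '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326' | nc localhost 12201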

Graylog Collector Sidecar is a lightweight supervisor for log collectors that is managed centrally through Graylog.
The sidecar daemon periodically polls the Graylog REST API for the tags defined in its configuration file; on first run it pulls the configurations assigned to those tags from the Graylog server and syncs them to the local machine.
It currently supports NXLog, Filebeat and Winlogbeat. All of them are configured centrally from the Graylog web interface, and output types such as Beats, CEF, GELF, JSON API and NetFlow are supported.

III. Configuring Graylog Collector Sidecar to collect nginx logs

1. Graylog server-side configuration

System -> Collectors (legacy) -> Manage Configurations -> Create configuration

Enter a name for the configuration, e.g. linux, indicating that it is used to collect logs from Linux hosts, then save.

Click "linux" to configure it and Create Output. The Output defines the shipper type (FileBeat) and the destination hosts the logs flow to (the Graylog IP); save it. Think of it as writing the recipient's address on a parcel.

 

Create an Input by clicking Create Input under Configure Beats Inputs. An Input is effectively a tag under the configuration created above and describes the source logs. Like filling in the sender's details on a parcel, you can define multiple Inputs to distinguish different senders, i.e. different log types.

In the Input, fill in name (who is sending), Forward to (which Output to send to), Type (linux or windows), Path to Logfile (the sender's detailed address, i.e. the log file path) and Type of input file (this becomes the type field when the logs are analysed in ES, which makes it easy to tell log types apart), then save.

 

Add the newly created tag via Update tags (if the drop-down is empty, click create first); otherwise the client will not be able to find the tag. That completes the server-side configuration.

System / Inputs: select Beats from the drop-down and click Launch new input.

 

If this is a Graylog cluster, select Global so the port is opened on every node; otherwise select the node. Set the title to Beats input and the port to 12201, leave the rest at the defaults, and click save.

Create an API token for use when configuring the client: System / Sidecars -> Create or reuse a token for the graylog-sidecar user.

Note down the token, it is needed for the client configuration: 1emk2bvmnu3chbm0k0vtk8f9ivn1iuv8oevroaobm89e1semvljp

The sidecar then needs a configuration bound to filebeat: System / Sidecars -> Configuration -> Create configuration -> enter a name, select the collector (filebeat on Linux) and edit the config (the paths and the output address), as in the sketch below.
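A minimal sketch of that configuration, based on the default filebeat template and assuming nginx logs under /var/log/nginx/ and the Beats input on port 12201 opened above (adjust the paths and hosts values to your environment):

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/*.log
output.logstash:
  hosts: ["10.2.33.101:12201"]
path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log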

 

Bind the configuration: System / Sidecars -> Administration -> tick filebeat under the chemex host -> click Configure on the right -> select the filebeat on linux configuration to apply it.

2. nginx client configuration (using this machine, CentOS 7.5 running nginx, as the example)

wget https://github.com/Graylog2/collector-sidecar/releases/download/1.1.0/graylog-sidecar-1.1.0-1.x86_64.rpm
yum -y install graylog-sidecar-1.1.0-1.x86_64.rpm
[root@chemex data]# cat /etc/graylog/sidecar/sidecar.yml |grep -v ^#
server_url: "http://10.2.33.101:9000/api/"
server_api_token: "1emk2bvmnu3chbm0k0vtk8f9ivn1iuv8oevroaobm89e1semvljp"
node_id: nginx1
update_interval: 10
tls_skip_verify: false
send_status: true

graylog-sidecar -service install
systemctl start graylog-sidecar
systemctl enable graylog-sidecar

[root@chemex ~]# systemctl status graylog-sidecar
● graylog-sidecar.service - Wrapper service for Graylog controlled collector
Loaded: loaded (/etc/systemd/system/graylog-sidecar.service; enabled; vendor preset: disabled)
Active: active (running) since 三 2022-03-23 09:44:07 CST; 5min ago
Main PID: 7364 (graylog-sidecar)
CGroup: /system.slice/graylog-sidecar.service
├─7364 /usr/bin/graylog-sidecar
└─7392 /usr/share/filebeat/bin/filebeat -c /var/lib/graylog-sidecar/generated/filebeat.conf

3月 23 09:44:07 chemex systemd[1]: Started Wrapper service for Graylog controlled collector.
3月 23 09:44:07 chemex systemd[1]: Starting Wrapper service for Graylog controlled collector...
3月 23 09:44:07 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:07+08:00" level=info msg="Using node-id: chemex"
3月 23 09:44:07 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:07+08:00" level=info msg="No node name was configured, falling back to hostname"
3月 23 09:44:07 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:07+08:00" level=info msg="Starting signal distributor"
3月 23 09:44:17 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:17+08:00" level=info msg="Adding process runner for: filebeat"
3月 23 09:44:17 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:17+08:00" level=info msg="[filebeat] Configuration change detected, rewriting configuration file."
3月 23 09:44:17 chemex graylog-sidecar[7364]: time="2022-03-23T09:44:17+08:00" level=info msg="[filebeat] Starting (exec driver)"



On Linux, graylog-sidecar relies on a third-party program as the actual collector, either filebeat or nxlog. We use filebeat, available from https://www.elastic.co/cn/downloads/beats
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.1-x86_64.rpm
yum -y install filebeat-7.1.1-x86_64.rpm
Modify the following lines of /etc/filebeat/filebeat.yml (the numbers refer to line positions in the stock file):
21 - type: log
24 enabled: true
28 - /var/log/nginx/*
150 hosts: ["10.2.33.101:9200"]
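Put together, the edited stanzas look roughly like this (a sketch of the relevant parts only, not the complete file; the output points at the Elasticsearch port exposed by the compose file above):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*

output.elasticsearch:
  hosts: ["10.2.33.101:9200"]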

[root@chemex data]# systemctl start filebeat

[root@chemex data]# systemctl enable filebeat

[root@chemex ~]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2022-03-22 17:55:24 CST; 15h ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 8820 (filebeat)
CGroup: /system.slice/filebeat.service
└─8820 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

3月 23 09:47:56 chemex filebeat[8820]: 2022-03-23T09:47:56.528+0800 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"be...
3月 23 09:48:26 chemex filebeat[8820]: 2022-03-23T09:48:26.528+0800 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"be...
3月 23 09:48:29 chemex filebeat[8820]: 2022-03-23T09:48:29.248+0800 INFO log/harvester.go:279 File is inactive: /var/log/nginx/access.log. Closing because close_inac... 5m0s reached.
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.183+0800 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://10.2.33.101:9200)): Connection m...
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.184+0800 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://10.2.33.101:92...ect attempt(s)
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.184+0800 INFO [publisher] pipeline/retry.go:189 retryer: send unwait-signal to consumer
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.184+0800 INFO [publisher] pipeline/retry.go:191 done
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.184+0800 INFO [publisher] pipeline/retry.go:166 retryer: send wait signal to consumer
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.184+0800 INFO [publisher] pipeline/retry.go:168 done
3月 23 09:48:35 chemex filebeat[8820]: 2022-03-23T09:48:35.185+0800 INFO elasticsearch/client.go:734 Attempting to connect to Elasticsearch version 6.8.5
Hint: Some lines were ellipsized, use -l to show in full.

3. Viewing the collected logs in Graylog
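On the Search page you can narrow the view to messages from this host, for example with a query like the following (assuming the source field carries the hostname chemex used above):

source:chemex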

IV. Configuring Graylog dashboards

1. Create a new dashboard: Create new dashboard

 

V. Version compatibility

This is what held me up the longest; pay close attention to how the versions match up. For reference, the versions used in this walkthrough are MongoDB 3, Elasticsearch 6.8.5 (OSS), Graylog 3.3, Graylog Sidecar 1.1.0 and Filebeat 7.1.1.

 
