3. Deploying ELK + the logtrail plugin with Docker to display logs collected by Filebeat (Filebeat output to Redis, Logstash input from Redis)
Reference documentation for the images used in this article
All components in this article use version 6.7.0.
- Elasticsearch: provides near-real-time storage, search, and analysis for large volumes of data. In this project it stores all of the collected logs.
- Logstash: a data collection engine that dynamically ingests data from a variety of sources, then filters, parses, enriches, and normalizes it before shipping it to a destination of your choice.
- Kibana: a data analysis and visualization platform that presents the data stored in Elasticsearch in tables and charts.
- Filebeat: a lightweight open-source log file shipper. It is usually installed on the client whose data needs to be collected; given a directory and a log format, Filebeat quickly collects the data and sends it to Logstash for parsing, or directly to Elasticsearch for storage.
- Redis: a NoSQL key-value store that also serves as a lightweight message queue; it smooths out spikes of high-concurrency log traffic and decouples the components of the architecture.
Prerequisites:
(create the required directories)
[root@k8s-master ~]# mkdir -p /home/elk/images
[root@k8s-master ~]# mkdir -p /home/elk/config
[root@k8s-master ~]# mkdir -p /home/elk/elk_data
1. Download the Kibana plugin logtrail
Kibana requires the plugin version to match the Kibana version exactly. If you cannot find a logtrail release for your Kibana version, follow the instructions in the logtrail repository to update the Kibana version inside the plugin archive.
[root@k8s-master images]# wget https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-6.7.0-0.1.31.zip
[root@k8s-master images]# wget https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-7.4.0-0.1.31.zip
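If your Kibana version has no matching logtrail release, a common workaround (an assumption here, not spelled out in this article; the exact path inside the archive may differ by release) is to update the kibana.version field inside the plugin archive before installing it:
[root@k8s-master images]# unzip logtrail-6.7.0-0.1.31.zip -d logtrail-tmp
[root@k8s-master images]# vim logtrail-tmp/kibana/logtrail/package.json    # set "kibana": { "version": "<your Kibana version>" }
[root@k8s-master images]# cd logtrail-tmp && zip -r ../logtrail-custom-0.1.31.zip kibana && cd ..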
Official guide for installing Kibana plugins
Official documentation for the elk-docker image
2. Build an extended ELK image with the sivasamyk/logtrail plugin for Kibana:
Write the Dockerfile (method 1)
Run the following command in the images directory created above
[root@k8s-master images]# cat > Dockerfile << eric
# Base image: the ELK stack image https://hub.docker.com/r/sebp/elk
FROM sebp/elk:670
#ADD 02-beats-input.conf /etc/logstash/conf.d/02-beats-input.conf
#ADD 30-output.conf /etc/logstash/conf.d/30-output.conf
# Add the sivasamyk/logtrail plugin for Kibana
ADD ./logtrail-6.7.0-0.1.31.zip /opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
# Switch to the Kibana working directory
WORKDIR \${KIBANA_HOME}
# Install the plugin for Kibana inside the container
RUN gosu kibana bin/kibana-plugin install file:///opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
eric
[root@k8s-master images]#
2.1 Extended ELK image with the sivasamyk/logtrail plugin for Kibana
Write the Dockerfile (method 2)
Create the Dockerfile directly with vim Dockerfile
# Base image: the ELK stack image https://hub.docker.com/r/sebp/elk
FROM sebp/elk:670
#ADD 02-beats-input.conf /etc/logstash/conf.d/02-beats-input.conf
#ADD 30-output.conf /etc/logstash/conf.d/30-output.conf
# Add the sivasamyk/logtrail plugin for Kibana
ADD ./logtrail-6.7.0-0.1.31.zip /opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
# Switch to the Kibana working directory
WORKDIR ${KIBANA_HOME}
# Install the plugin for Kibana inside the container
RUN gosu kibana bin/kibana-plugin install file:///opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
3. Build the ELK image
[root@k8s-master images]# pwd
/home/elk/images
[root@k8s-master images]# ll
-rw-r--r--. 1 root root 469 7月 1 10:20 Dockerfile
-rw-r--r--. 1 root root 3878447 4月 28 13:45 logtrail-6.7.0-0.1.31.zip
[root@k8s-master images]#
[root@k8s-master images]# docker build -t k8s.dev-share.top/elk:v1.0 .
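As a quick sanity check (not part of the original steps), confirm that the tagged image exists before moving on:
[root@k8s-master images]# docker images | grep k8s.dev-share.top/elk    # should list the k8s.dev-share.top/elk:v1.0 image just built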
4. Write the logtrail configuration file logtrail.json (method 1) [for method 2 see 2.1]
[root@k8s-master config]# pwd
/home/elk/config
[root@k8s-master config]# cat > logtrail.json << leo
{
"version": 2,
"index_patterns": [{
"es": {
"default_index": "paas-st-*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
"mapping": {
"timestamp": "@timestamp",
"program": "tags",
"hostname": "attrs.service",
"message": "log"
},
"message_format": "{{{log}}} | {{{marker}}}",
"keyword_suffix": "keyword"
}
},{
"es": {
"default_index": "paas-uat-*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
"mapping": {
"timestamp": "@timestamp",
"program": "tags",
"hostname": "attrs.service",
"message": "log"
},
"message_format": "{{{log}}} | {{{marker}}}",
"keyword_suffix": "keyword"
}
}]
}
leo
[root@k8s-master config]#
Write the logtrail configuration file logtrail.json (configuration actually used)
{
"version": 2,
"index_patterns": [{
"es": {
"default_index": "st-*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
"mapping": {
"timestamp": "@timestamp",
"program": "tags",
"hostname": "attrs.service",
"message": "log"
},
"message_format": "{{{log}}} | {{{marker}}}",
"keyword_suffix": "keyword"
}
},{
"es": {
"default_index": "uat-*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
"mapping": {
"timestamp": "@timestamp",
"program": "tags",
"hostname": "kubernetes.labels.name",
"message": "message"
},
"hostname_format": "{{{kubernetes.namespace}}} | {{{hostname}}}",
"message_format": "{{{kubernetes.namespace}}} | {{{message}}}",
"keyword_suffix": "keyword"
}
},{
"es": {
"default_index": "prod-*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
"mapping": {
"timestamp": "@timestamp",
"program": "tags",
"hostname": "kubernetes.labels.name",
"message": "message"
},
"hostname_format": "{{{kubernetes.namespace}}} | {{{hostname}}}",
"message_format": "{{{kubernetes.namespace}}} | {{{message}}}",
"keyword_suffix": "keyword"
}
}]
}
5. Configure the Logstash input (02-beats-input.conf, the configuration that pulls log data from Redis) (method 1) [for method 2 see 2.1]
[root@k8s-master config]# cat > 02-beats-input.conf << leo
# Beats -> Logstash -> Elasticsearch pipeline.
input {
redis {
host => "REDIS_HOST_IP"
port => "6379"
password => "REDIS_PASSWORD"
key => "REDIS_KEY_NAME"
data_type => "list"
db => "4"
}
}
leo
[root@k8s-master config]#
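Before starting the stack it helps to confirm that Filebeat is actually pushing events into the Redis list (the host, password, and key below are placeholders and must match the values used above and in filebeat.yml):
[root@k8s-master config]# redis-cli -h REDIS_HOST_IP -p 6379 -a REDIS_PASSWORD -n 4 LLEN REDIS_KEY_NAME    # a non-zero length means events are queued; 0 means Filebeat is not shipping yet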
6. Configure the Logstash output to Elasticsearch (30-output.conf, the configuration that pushes log data to Elasticsearch)
[root@k8s-master config]# cat > 30-output.conf << leo
filter {
grok {
match => { "log" => "%{LOGLEVEL:level}" }
}
if [level] == "DEBUG" {
drop {}
}
if [level] == "WARN" {
drop {}
}
if "_grokparsefailure" in [tags] {
drop {}
}
}
output {
elasticsearch {
hosts => ["192.168.180.6:9200"]
index => "%{tags}-%{+YYYY.MM.dd}"
}
}
leo
[root@k8s-master config]#
tags is the tags value configured in filebeat.yml
log is the field produced by the Filebeat input type (see filebeat.yml)
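For reference, a minimal filebeat.yml sketch of the Redis side (this is an assumption based on the fields used above, not the article's actual file; the tag value, input type, and Redis settings are placeholders that must match 02-beats-input.conf):
filebeat.inputs:
- type: docker
  containers.ids: ["*"]            # collect logs from all Docker containers on the host
  tags: ["paas-st"]                # becomes the tags field used in the index name %{tags}-%{+YYYY.MM.dd}
output.redis:
  hosts: ["REDIS_HOST_IP:6379"]
  password: "REDIS_PASSWORD"
  db: 4
  key: "REDIS_KEY_NAME"            # the list that Logstash reads with data_type => "list"
  datatype: "list"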
7. Check all the files you created
All three files are required; none can be missing
[root@k8s-master config]# ll
total 12
-rw-r--r-- 1 root root 259 Jul 1 15:42 02-beats-input.conf
-rw-r--r-- 1 root root 219 Jul 1 15:42 30-output.conf
-rw-r--r-- 1 root root 1177 Jul 1 15:42 logtrail.json
[root@k8s-master config]#
8. Create a docker-compose file and deploy the image with docker-compose (method 1) [for method 2 see 2.1]
The image referenced by docker-compose (k8s.dev-share.top/elk:v1.0) is the name we gave the image when we built it [see step 3]
[root@k8s-master elk]# pwd
/home/elk
[root@k8s-master elk]# cat > docker-compose.yml << leo
version: '3'
services:
elk:
image: k8s.dev-share.top/elk:v1.0
build: images
container_name: elk-leo
restart: always
ports:
- "5601:5601"
- "9200:9200"
- "5044:5044"
volumes:
# Elasticsearch data directory
- ./elk_data:/var/lib/elasticsearch
# Override the Logstash config that pulls log data from Redis
- ./config/02-beats-input.conf:/etc/logstash/conf.d/02-beats-input.conf
# Override the Logstash config that pushes log data to Elasticsearch
- ./config/30-output.conf:/etc/logstash/conf.d/30-output.conf
# Override the logtrail plugin configuration
- ./config/logtrail.json:/opt/kibana/plugins/logtrail/logtrail.json
# Override the Elasticsearch memory settings
#- ./config/jvm.options:/etc/elasticsearch/jvm.options
- ./config/kibana.yml:/opt/kibana/config/kibana.yml
- ./logs/kibana:/var/log/kibana
- ./logs/elasticsearch:/var/log/elasticsearch
- ./logs/logstash:/var/log/logstash
#- ./config/12-some-filter.conf:/etc/logstash/conf.d/12-some-filter.conf
environment:
# Elasticsearch heap size (-Xms8g -Xmx8g)
- ES_HEAP_SIZE=8g
# Logstash heap size
- LS_HEAP_SIZE=1g
# Seconds to wait for Elasticsearch to start
- ES_CONNECT_RETRY=60
# Give Node.js more memory when Kibana starts
- NODE_OPTIONS="--max-old-space-size=4096"
# Elasticsearch JVM heap options
#- ES_JAVA_OPTS="-Xms6g -Xmx10g"
#- MAX_MAP_COUNT=262144
#deploy:
# resources: # resource limits
# reservations: # system resources reserved for the container (always available)
# memory: 4096M
# es-head_5:
# #container_name: es-head_5
# image: mobz/elasticsearch-head:5
# container_name: es-head_5
# network_mode: "host"
# #command: /opt/kibana/bin/kibana-plugin install file:///opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
# volumes:
# # adjust mount locations
# - /home/elk/head/Gruntfile.js:/usr/src/app/Gruntfile.js
# - /home/elk/head/app.js:/usr/src/app/_site/app.js
# ports:
# - "9100:9100"
# #links:
# # - elasticsearch-central:elasticsearch
leo
[root@k8s-master elk]#
9. Deploy: run the docker-compose.yml file you just created
[root@k8s-master elk]# ll
total 16
drwxr-xr-x 2 root root 4096 Jul 1 15:42 config
-rw-r--r-- 1 root root 681 Jul 1 15:44 docker-compose.yml
drwxr-xr-x 2 root root 4096 Jul 1 14:57 elk_data
drwxr-xr-x 2 root root 4096 Jul 1 15:13 images
[root@k8s-master elk]#
[root@k8s-master elk]# docker-compose up -d
Creating network "elk_default" with the default driver
Creating elk_elk_1 ... done
[root@k8s-master elk]#
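Once the container is up, a couple of quick checks (not part of the original steps) confirm that each service is answering:
[root@k8s-master elk]# docker-compose ps                           # the elk service should show an Up state
[root@k8s-master elk]# curl -s http://localhost:9200               # Elasticsearch should return its JSON version banner
[root@k8s-master elk]# curl -s -I http://localhost:5601 | head -1  # Kibana should answer on port 5601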
If Kibana is not reachable, run docker exec -it into the container to see which service has died, then start it again
root@abd2101820ef:/etc/init.d# pwd
/etc/init.d
root@abd2101820ef:/etc/init.d# ps -ef
root@abd2101820ef:/etc/init.d# ./kibana start
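To confirm the logtrail plugin itself was installed into Kibana, you can list the installed plugins from inside the container (a sketch; it assumes KIBANA_HOME is set in the sebp/elk image, as the WORKDIR line in the Dockerfile suggests):
root@abd2101820ef:/# cd ${KIBANA_HOME} && gosu kibana bin/kibana-plugin list    # should print logtrail@0.1.31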
Note: annotated logtrail.json configuration
{
"version": 2,
"index_patterns": [{
"es": {
"default_index": "*",
"allow_url_parameter": false,
"timezone": "UTC"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss",
"max_buckets": 500,
"nested_objects": false,
"default_time_range_in_days": 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"default_search": "",
"fields": {
// The keys in mapping are the fields logtrail shows in its console; the values are the corresponding Filebeat fields as displayed in Kibana
// In other words, the mapping tells logtrail which of the Kibana log fields to display
// The final rendered result of this configuration looks like:
// 2019年07月01日 14:11:11 ["ST-k8s-master"]: 2019-07-01 14:11:07.664 INFO ......
// mapping controls the first half of each line; message_format controls the second half
"mapping": {
"timestamp": "@timestamp",
"hostname": "attrs.service", // attrs.service 是自己定义的 kibana属性(其实是filebeat获取日志中labels的属性所转换的)
"program": "tags", // tags 是kibana的属性
"message": "log"
},
// format of each log line displayed by logtrail
"message_format": "{{{log}}}",
"keyword_suffix": "keyword"
},
"color_mapping": {
"field": "log_level",
"mapping": {
"ERROR": "#FF0000",
"WARN": "#FFEF96",
"DEBUG": "#B5E7A0",
"TRACE": "#CFE0E8",
"INFO": "#339999"
}
}
}]
}
2. Configuring elasticsearch-head
The following two files must be configured inside the image
/usr/src/app/Gruntfile.js
/usr/src/app/_site/app.js
1. In Gruntfile.js, modify the following block:
connect: {
server: {
options: {
/* the default bind address is 127.0.0.1; change it to 0.0.0.0 */
hostname: '0.0.0.0',
port: 9100,
base: '.',
keepalive: true
}
}
2. In app.js, modify the following snippet:
/* change localhost to the Elasticsearch cluster address; in a Docker deployment this is usually the address of the Elasticsearch host */
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";
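If you prefer to script the app.js change, the base_uri line shown above can be patched with sed (ES_HOST_IP is a placeholder, and es-head_5 is the container name from the commented-out service in the docker-compose file; the Gruntfile.js change is easiest done by hand or by mounting pre-edited copies, as that commented-out service does):
docker exec -it es-head_5 sed -i 's#http://localhost:9200#http://ES_HOST_IP:9200#g' /usr/src/app/_site/app.js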
Final configuration: docker-compose.yml
version: '3'
services:
elk:
image: sinoeyes.io/elk:v1.0
build: ./images
container_name: elk-paas
restart: always
ports:
- "5601:5601"
- "9200:9200"
- "5044:5044"
volumes:
#- ./data:/var/lib/elasticsearch
- ./elk_data:/var/lib/elasticsearch
- ./config/02-beats-input.conf:/etc/logstash/conf.d/02-beats-input.conf
- ./config/30-output.conf:/etc/logstash/conf.d/30-output.conf
- ./config/kibana.yml:/opt/kibana/config/kibana.yml
- ./config/logstash.yml:/opt/logstash/config/logstash.yml
#- ./config/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml
#- ./config/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml
- ./config/logtrail.json:/opt/kibana/plugins/logtrail/logtrail.json
#- ./config/jvm.options:/opt/elasticsearch/config/jvm.options
- ./logs/kibana:/var/log/kibana
- ./logs/elasticsearch:/var/log/elasticsearch
- ./logs/logstash:/var/log/logstash
#- ./config/12-some-filter.conf:/etc/logstash/conf.d/12-some-filter.conf
environment:
- ES_HEAP_SIZE=8g
- LS_HEAP_SIZE=1g
- ES_CONNECT_RETRY=120
- NODE_OPTIONS="--max-old-space-size=2000"
#- ES_JAVA_OPTS="-Xms10g -Xmx10g"
#- MAX_OPEN_FILES=65536
#- MAX_MAP_COUNT=262144
# es-head_5:
# #container_name: es-head_5
# image: mobz/elasticsearch-head:5
# container_name: es-head_5
# network_mode: "host"
# #command: /opt/kibana/bin/kibana-plugin install file:///opt/kibana/plugin/logtrail-6.7.0-0.1.31.zip
# volumes:
# # adjust mount locations
# - /home/elk/head/Gruntfile.js:/usr/src/app/Gruntfile.js
# - /home/elk/head/app.js:/usr/src/app/_site/app.js
# ports:
# - "9100:9100"
# #links:
# # - elasticsearch-central:elasticsearch
If startup fails with the error below, raise max_map_count as follows:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This error is raised when Elasticsearch starts.
In plain terms: the elasticsearch user's memory-map limit is too small; it must be at least 262144.
Fix:
Switch to the root user
and run:
sysctl -w vm.max_map_count=262144
Check the result:
sysctl -a|grep vm.max_map_count
It should print:
vm.max_map_count = 262144
The change above is lost when the machine reboots, so to make it permanent:
add the following line at the end of /etc/sysctl.conf
vm.max_map_count=262144
This makes the change permanent.
The max_map_count setting limits the number of VMAs (virtual memory areas) a single process may own. A virtual memory area is a contiguous region of virtual address space; these regions are created throughout a process's lifetime whenever the program maps a file into memory, attaches to a shared memory segment, or allocates heap space. Tuning this value limits how many VMAs a process can hold. Capping the total number of VMAs can cause an application to fail: when a process hits the limit but can only release a small amount of memory to other kernel processes, the operating system reports an out-of-memory error. If your system uses only a small amount of memory in the NORMAL zone, lowering this value can help free memory for the kernel.
3. Periodically deleting Elasticsearch data
If Elasticsearch data is never deleted, the stored data keeps growing, and once the disk is full no new data can be written. A scheduled script can delete expired data.
Method 1:
#!/bin/bash
#es-index-clear
#Keep only the last 15 days of log indices (delete the indices dated exactly 15 days ago)
LAST_DATA=`date -d "-15 days" "+%Y.%m.%d"`
#Delete every index matching that date
curl -XDELETE 'http://ip:port/*-'${LAST_DATA}'*'
Adjust the number of retained days to suit your situation; the ip and port here should likewise point to the node that does not store data. The script only needs to run on a schedule on one machine in the ES cluster.
Add a scheduled task with crontab -e:
0 1 * * * /search/odin/elasticsearch/scripts/es-index-clear.sh
This clears the old index at 1 a.m. every day.
Note: make sure the crond service is running
[root@server scripts]# service crond status
Method 2:
#!/bin/bash
#Delete ELK logs older than 30 days
DATE=`date -d "30 days ago" +%Y.%m.%d`
curl -s -XGET http://127.0.0.1:9200/_cat/indices?v| grep $DATE | awk -F '[ ]+' '{print $3}' >/tmp/elk.log
for elk in `cat /tmp/elk.log`
do
curl -XDELETE "http://127.0.0.1:9200/$elk"
done
Add it to cron:
# crontab -e
#Clean up the ELK indices at 1 a.m. every day
00 01 * * * bash /server/scripts/elk.sh &>/dev/null
4. Configuration files provided in this article
1. By default, escape sequences such as \t and \n inside strings are not processed; you need to enable Logstash's character-escape support as follows:
Edit Logstash's config/logstash.yml, find config.support_escapes, remove the comment marker, and change the value from the default false to true.
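As a hedged illustration (not from the original article): with config.support_escapes: true, a quoted "\n" in a pipeline file is interpreted as a real newline, so a filter like the one below can flatten multi-line messages; with the default false it would match the literal two characters backslash and n.
filter {
  mutate {
    # "\n" is treated as a newline only when config.support_escapes is true
    gsub => ["message", "\n", " | "]
  }
}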
vim logstash.yml
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
pipeline.workers: 4
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
pipeline.batch.size: 3000
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
config.support_escapes: true
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
2. To switch the Kibana UI to Chinese
add the following as the first line of kibana.yml
i18n.locale: "zh-CN"
If Kibana keeps getting killed, increase server.maxPayloadBytes.
If it still dies (the root cause is an old Kibana version), write the script below, add it to the container, and invoke it on a schedule
(alternatively, use "ps -ef | grep 5601" as the check condition):
if [ -z "`/sbin/fuser -n tcp 5601`" ];then
nohup /***/etc/init.d/kibana start &
fi
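To run that check periodically, one option (the script path below is hypothetical, not from the original article) is a cron entry inside the container:
*/5 * * * * /bin/bash /opt/scripts/check-kibana.sh >/dev/null 2>&1    # every 5 minutes, restart Kibana if nothing is listening on port 5601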
vim kibana.yml
i18n.locale: "zh-CN"
# Default Kibana 5 file from https://github.com/elastic/kibana/blob/master/config/kibana.yml
#
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
#server.host: "192.168.180.6"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# Maximum payload size in bytes for incoming server requests (50 MB)
server.maxPayloadBytes: 52428800
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
#elasticsearch.url: "http://192.168.180.6:9200"
# When this setting’s value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn’t already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.cert: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.cert: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.ca: /path/to/your/CA.pem
# To disregard the validity of SSL certificates, change this setting’s value to false.
#elasticsearch.ssl.verify: true
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
elasticsearch.pingTimeout: 180000
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 80000
elasticsearch.requestTimeout: 180000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 50000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
cat elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.180.6
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
action.destructive_requires_name: true
# Disable automatic index creation
action.auto_create_index: false
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
#path.logs: /opt/elasticsearch/logs
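A quick way (an assumption, not from the original article) to confirm the CORS settings above will let elasticsearch-head connect is to send a request with an Origin header and look for the Access-Control-Allow-Origin response header:
curl -s -I -H "Origin: http://HEAD_HOST:9100" http://ES_HOST:9200 | grep -i access-control-allow-origin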
cat jvm.options
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms6g
-Xmx6g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75
## DNS cache policy
# cache ttl in seconds for positive DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.ttl; set to -1 to cache forever
-Des.networkaddress.cache.ttl=60
# cache ttl in seconds for negative DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.negative ttl; set to -1 to cache
# forever
-Des.networkaddress.cache.negative.ttl=10
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# explicitly set the stack size
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log
## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
9-:-Djava.locale.providers=COMPAT
# temporary workaround for C2 bug with JDK 10 on hardware with AVX-512
10-:-XX:UseAVX=2
Note: if everything above is configured but no logs appear, the precondition stated in the title has not been met
(Filebeat must already be configured and pushing the logs it collects into Redis; otherwise the configuration above will show no log data.
Configure Filebeat to collect Kubernetes and Docker logs, then check the collected logs in the Redis instance you configured.)
Base system configuration changes:
1. The operating system's limit on mmap counts must be set to at least 262144
sysctl -w vm.max_map_count=262144
2. Verify that the change took effect
sysctl vm.max_map_count
To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf
[root@test~]# cat /etc/sysctl.conf
vm.max_map_count = 262144
Increase the open-file limit
# ulimit -n 102400 && ulimit -n && vim /etc/security/limits.conf
and add the following two lines
* soft nofile 102400
* hard nofile 102400