Today I looked into Elasticsearch clustering so that I can use it in future projects. Here is a summary of my hands-on setup and testing process.

Elasticsearch supports clustering out of the box, so forming a cluster only requires editing Elasticsearch's own configuration file.

I. Installing Elasticsearch

1. Download the Elasticsearch package. I usually download the zip package (for installing on my local Windows machine); if you are installing on Linux, the tar.gz package is recommended.

2. Extract the package to a directory of your choice on the local machine.

3. Modify the Elasticsearch configuration. In the config folder under the extracted directory you will find two configuration files: elasticsearch.yml and logging.yml.

elasticsearch.yml is the file we need to modify to set up the cluster.

II. Important parameters in the Elasticsearch configuration file

1. cluster.name is the name of the cluster and is used for auto-discovery. If you run multiple clusters on the same network, make sure each cluster uses a unique name.

Example: cluster.name: elasticsearchForTest

2. node.name: node names are generated dynamically on startup, so you are not required to configure them manually; you can, however, tie a node to a specific name. Its purpose is to distinguish this node from the other nodes in the cluster, so pick a distinctive name for each node.

Example: node.name: "test01"

3. node.master controls whether this node is eligible to be elected master.

4. node.data controls whether this node stores data.

Note: node.master and node.data can be combined in four ways, each with a different meaning.

node.master: true
node.data: true

With the settings above, the node is eligible to be elected master and also stores data. This is the default configuration; if you do not modify the configuration file, every node runs in this mode.

node.master: true
node.data: false

With the settings above, the node is eligible to be elected master but stores no data; it acts as the cluster's coordinator.

node.master: false
node.data: true

With the settings above, the node is not eligible to be elected master; it only stores data and becomes a workhorse of the cluster.

node.master: false
node.data: false

With the settings above, the node is neither master-eligible nor stores data, but it acts as a search load balancer: it fetches data from other nodes and aggregates the results.

5. http.port sets the HTTP port that clients connect to. With one node per machine you can leave it alone; the default is 9200. If you start several Elasticsearch instances on the same machine, change this port on each instance to avoid port conflicts.

6. transport.tcp.port sets the port used for node-to-node communication; the default is 9300. As above, instances sharing one machine each need their own port.
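For example, two instances sharing one machine could be separated like this (the values match the node configurations shown later in this post):

http.port: 9201
transport.tcp.port: 9400

http.port: 9202
transport.tcp.port: 9500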

7. discovery.zen.ping.timeout sets how long to wait for ping responses from other nodes during discovery. On a slow network, set a higher value.

Example: discovery.zen.ping.timeout: 60s

8. discovery.zen.ping.unicast.hosts configures an initial list of master-eligible nodes, i.e. the hosts and ports of the servers that may be elected master. The port here is each node's transport.tcp.port, used for node-to-node communication. This list lets new nodes (master or data) discover the cluster quickly when they start.
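Example: discovery.zen.ping.unicast.hosts: ["localhost:9400", "localhost:9500"]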

III. I started three Elasticsearch services on my local machine: one acts purely as a load balancer, and the other two are both master-eligible and store data. The problem I want to solve is: if one of the data-bearing services goes down, the other should keep supporting data operations. Below is the configuration of one of my services (the load-balancer node):

##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
#node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>


################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elasticsearchForTest


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "test"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
#node.master: true
#
# Allow this node to store data (enabled by default):
#
#node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#
#node.master: false
#node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#
#node.master: true
#node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
node.master: false
node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
#node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:
#node.max_local_storage_nodes: 1


#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#
#index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#
#index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
#index.number_of_shards: 1
#index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows to
#    _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
#    cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.


#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
#path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
#path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
#path.work: /path/to/work

# Path to log files:
#
#path.logs: /path/to/logs

# Path to where plugins are installed:
#
#path.plugins: /path/to/plugins


#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#
#plugin.mandatory: mapper-attachments,lang-groovy


################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
#bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.


############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
#network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
#http.port: 9200

# Set a custom allowed content length:
#
#http.max_content_length: 100mb

# Disable HTTP completely:
#
#http.enabled: false


################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html>.

# The default gateway type is the "local" gateway (recommended):
#
#gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).

# Allow recovery process after N nodes in a cluster are up:
#
#gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
#gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
#gateway.expected_nodes: 2


############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
#cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#
#cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#
#indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
#indices.recovery.concurrent_streams: 5


################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. Its recommended to set it to a higher value
# than 1 when running more than 2 nodes in the cluster.
#
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
discovery.zen.ping.timeout: 60s

# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
#discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
discovery.zen.ping.unicast.hosts: ["localhost:9400", "localhost:9500"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>
#
# See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>
# for a step-by-step tutorial.

# GCE discovery allows to use Google Compute Engine API in order to perform discovery.
#
# You have to install the cloud-gce plugin for enabling the GCE discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-gce>.

# Azure discovery allows to use Azure API in order to perform discovery.
#
# You have to install the cloud-azure plugin for enabling the Azure discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-azure>.

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging ################################

#monitor.jvm.gc.young.warn: 1000ms
#monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

#monitor.jvm.gc.old.warn: 10s
#monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s

Here I list the configuration differences between the three services:

1. The first service acts as the load balancer. My application connects to this node's address and port, so I keep both of its ports at the defaults (HTTP 9200, transport 9300). The only settings I changed are:

cluster.name: elasticsearchForTest
node.name: "test"
node.master: false
node.data: false
discovery.zen.ping.timeout: 60s
discovery.zen.ping.unicast.hosts: ["localhost:9400", "localhost:9500"]

2. The second service is configured as a master-eligible data node (node.master and node.data keep their default value of true):

cluster.name: elasticsearchForTest
node.name: "test01"
transport.tcp.port: 9500
http.port: 9202
discovery.zen.ping.timeout: 60s
discovery.zen.ping.unicast.hosts: ["localhost:9400", "localhost:9500"]

3. The third service is likewise configured as a master-eligible data node:

cluster.name: elasticsearchForTest
node.name: "test02"
transport.tcp.port: 9400
http.port: 9201
discovery.zen.ping.timeout: 60s
discovery.zen.ping.unicast.hosts: ["localhost:9400", "localhost:9500"]

4. After the configuration is done, start the three services one by one from the bin directory of each extracted folder, as sketched below.


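On Windows the service is started with the batch script in bin, on Linux with the shell script; a minimal sketch (run once per extracted directory):

bin\elasticsearch.bat        (Windows)
bin/elasticsearch -d         (Linux; -d runs it as a daemon)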
Once the services were up, I used the elasticsearch-head plugin to inspect the data, and created two indices, each with 5 shards and 1 replica.


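For reference, an index with the same settings can also be created over the HTTP API instead of through the head plugin; a minimal sketch, with my_index as a placeholder name:

curl -XPUT 'http://localhost:9200/my_index' -d '
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'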
The head plugin showed that the current master was the test01 service. I wanted to verify that if test01 went down, test02 would be elected master. After I shut down test01, the remaining nodes test and test02 logged the master re-election, and test02 indeed became the new master. The document counts were unchanged, so the failover test succeeded.
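Instead of the head plugin, you can also check the cluster state and the elected master from the command line, using the standard health and cat APIs (the health API is mentioned in the comments of the config file above):

curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/_cat/master?v'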

Then I inserted a document through my application and restarted the test01 service I had shut down earlier; the new data had been replicated to test01 as well. Finally I shut down the current master, test02: test01 was elected master again, and the document count still did not change.
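My application goes through the load-balancer node on the default port 9200; the insert it performs is equivalent to something like this over the HTTP API (index, type and document are illustrative):

curl -XPUT 'http://localhost:9200/my_index/my_type/1' -d '
{
  "title": "failover test"
}'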

If you add nodes later and need to change the number of replicas, that setting can be updated dynamically, without taking the cluster down.

https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html

I found the procedure at the official documentation link above; you can perform the change either through a plugin's visual interface or with curl:


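A sketch of that update call, assuming an index named my_index and a new replica count of 2:

curl -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
  "index": {
    "number_of_replicas": 2
  }
}'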
That is what I learned about Elasticsearch clustering today; I hope it helps others who are working through the same topic.
