The previous posts on Consul briefly covered installing Consul on Windows and using it as a service registry and configuration center. Building on that, this post covers downloading and installing Consul on Linux, building a Consul cluster with Docker, and solving the problem of Consul configuration data not being persisted.

1. Download and install Consul

https://www.consul.io/downloads.html

Choose the Linux version of Consul and download it.

 

 

2. Extract and install

Copy the downloaded Consul package to your Linux machine and extract it with unzip:

If the unzip command is not available on your Linux machine, install it with yum install unzip.
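A minimal sketch of this step and the copy in step 1 below, assuming the 1.2.1 Linux amd64 package used in this article (adjust the version and file names to what you actually downloaded):

# Install unzip if it is missing
yum install -y unzip

# Download and extract the package (file name assumed from the 1.2.1 release)
wget https://releases.hashicorp.com/consul/1.2.1/consul_1.2.1_linux_amd64.zip
unzip consul_1.2.1_linux_amd64.zip

# The archive contains a single consul binary; place it under /usr/local/consul
mkdir -p /usr/local/consul
mv consul /usr/local/consul/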

 

1. After extraction, copy the extracted file to the /usr/local/consul directory.

2. Configure environment variables

vi /etc/profile

Add the following:

export JAVA_HOME=/usr/local/jdk1.8.0_172
export MAVEN_HOME=/usr/local/apache-maven-3.5.4
export CONSUL_HOME=/usr/local/consul

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$CONSUL_HOME:$PATH

CONSUL_HOME above is the Consul installation path; the other entries are from this machine and are shown for reference only.

After saving the changes, make them take effect with:

source /etc/profile

With this in place, the consul command can be used from anywhere.
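A quick check that the PATH change took effect:

which consul    # should print /usr/local/consul/consul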

 

3. Verify the installation

1. Check the installed Consul version

[root@iZbp1dmlbagds9s70r8luxZ local]# consul -v
Consul v1.2.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@iZbp1dmlbagds9s70r8luxZ local]# 

 

2. Start Consul in development mode

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -dev
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '344af5b1-8914-41d6-f7b2-3143d025f493'
         Node name: 'iZbp1dmlbagds9s70r8luxZ'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 09:57:02 [DEBUG] agent: Using random ID "344af5b1-8914-41d6-f7b2-3143d025f493" as node ID
    2018/07/28 09:57:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:344af5b1-8914-41d6-f7b2-3143d025f493 Address:127.0.0.1:8300}]
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ.dc1 127.0.0.1
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ 127.0.0.1
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 09:57:02 [INFO] consul: Adding LAN server iZbp1dmlbagds9s70r8luxZ (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/07/28 09:57:02 [INFO] consul: Handled member-join event for server "iZbp1dmlbagds9s70r8luxZ.dc1" in area "wan"
    2018/07/28 09:57:02 [DEBUG] agent/proxy: managed Connect proxy manager started
    2018/07/28 09:57:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/07/28 09:57:02 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/07/28 09:57:02 [INFO] agent: started state syncer
    2018/07/28 09:57:02 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 09:57:02 [DEBUG] raft: Votes needed: 1
    2018/07/28 09:57:02 [DEBUG] raft: Vote granted from 344af5b1-8914-41d6-f7b2-3143d025f493 in term 2. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Election won. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
    2018/07/28 09:57:02 [INFO] consul: cluster leadership acquired
    2018/07/28 09:57:02 [INFO] consul: New leader elected: iZbp1dmlbagds9s70r8luxZ
    2018/07/28 09:57:02 [INFO] connect: initialized CA with provider "consul"
    2018/07/28 09:57:02 [DEBUG] consul: Skipping self join check for "iZbp1dmlbagds9s70r8luxZ" since the cluster is too small
    2018/07/28 09:57:02 [INFO] consul: member 'iZbp1dmlbagds9s70r8luxZ' joined, marking health alive
    2018/07/28 09:57:02 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:02 [INFO] agent: Synced node info
    2018/07/28 09:57:04 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync

 

As the output shows, the HTTP port is 8500, the DNS port is 8600, and the agent binds to the local IP 127.0.0.1. If Consul's ports need to be reachable from outside, ports 8500 and 8600 must be opened. For example:

# Check whether a port is open (CentOS / firewalld)
firewall-cmd --permanent --query-port=8500/tcp

# Open a port to the outside
firewall-cmd --permanent --add-port=8500/tcp

# Reload the firewall for the change to take effect
firewall-cmd --reload
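The DNS interface on 8600 serves both TCP and UDP, so if it also needs to be reachable from outside, open both:

firewall-cmd --permanent --add-port=8600/tcp
firewall-cmd --permanent --add-port=8600/udp
firewall-cmd --reload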

 

4. Consul agent parameters

Reference: see reference link

Common consul agent options:

-data-dir:
Purpose: the directory where the agent stores its state. Required for every agent, and especially important for servers, since they must persist the cluster state.

-config-dir:
Purpose: the directory holding service definitions and health-check definitions. By convention the directory is named consul.d, and its files are JSON. See the official docs for the full configuration reference.

-config-file:
Purpose: a single configuration file to load.

-dev:
Purpose: development mode. The agent runs as a server, but it must not be used in production, because nothing is persisted: no data is ever written to disk.

-bootstrap-expect:
Purpose: the minimum number of server nodes that must be present before a leader election starts. Set to 1, a lone node will elect itself; set to 3, the election waits until 3 servers are running and joined, and only after the election does the cluster work normally. 3 to 5 server nodes are generally recommended.

-node:
Purpose: the node's name in the cluster, which must be unique within the cluster (it defaults to the machine's hostname); the machine's IP is often used directly.

-bind:
Purpose: the IP address the node binds to, usually 0.0.0.0 or the cloud server's private address (an Aliyun public address will not work here). This is the address Consul listens on, and it must be reachable by every other node in the cluster. A bind address is not strictly required, but it is best to provide one.

-server:
Purpose: run the node as a server. 3 to 5 servers per datacenter (DC) are recommended.

-client:
Purpose: run the node as a client and set the bind address for the client interfaces: HTTP, DNS, and RPC.
The default is 127.0.0.1, which only allows loopback access.

-datacenter:
Purpose: the datacenter this machine joins. Older versions used -dc; -dc is no longer valid.
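To make these options concrete, here is an illustrative server start that combines them; every path, node name, and address below is a placeholder, not a value from this article:

consul agent -server \
  -bootstrap-expect=3 \
  -data-dir=/usr/local/consul/data \
  -config-dir=/usr/local/consul/consul.d \
  -node=server-1 \
  -bind=10.0.0.10 \
  -client=0.0.0.0 \
  -datacenter=dc1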

 

Consul concepts:

Agent: the long-running daemon on every member of a Consul cluster, started with the consul agent command. It can run in either client or server mode, can serve the DNS and HTTP interfaces, and is responsible for running checks and keeping services in sync.
Client: stateless; forwards interface requests to the server cluster on the LAN at very low cost.
Server: stores configuration data and forms a highly available cluster; it talks to local clients over the LAN and to other datacenters over the WAN. 3 or 5 servers per datacenter are recommended.
Datacenter: a datacenter; multiple datacenters working together keep stored data safe and access fast.
Consensus: the consistency protocol used is the Raft protocol.
RPC: remote procedure call.
Gossip: a gossip protocol built on Serf that handles membership, failure detection, event broadcast, and so on. Node-to-node messages travel over UDP, in two pools: one for the LAN and one for the WAN.
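Since every agent can serve DNS on port 8600, service discovery can be exercised directly with a DNS client. For example, against the local dev agent started earlier:

# Ask the agent's DNS interface for the consul service itself
dig @127.0.0.1 -p 8600 consul.service.consul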

1. Parameter example

The consul agent -dev instance started earlier cannot be reached from outside a cloud server. To make it externally accessible, add parameters as follows:

consul agent -dev -http-port 8500 -client 0.0.0.0

Parameter notes:

-client 0.0.0.0: bind the client interfaces to 0.0.0.0 rather than the default 127.0.0.1, so the agent can be reached over the public network.

-http-port 8500: changes the HTTP port Consul listens on.
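To verify external access, you can call the HTTP API from another machine; replace the address below with your server's public IP:

curl http://47.98.112.71:8500/v1/status/leader
# A non-empty answer (the leader's ip:port) means the HTTP API is reachable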

 

2. View the cluster member information:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul members
Node                     Address         Status  Type    Build  Protocol  DC   Segment
iZbp1dmlbagds9s70r8luxZ  127.0.0.1:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ local]# 

Node: the node name
Address: the node address
Status: alive means the node is healthy
Type: server means the node is running in server mode
DC: dc1 means the node belongs to datacenter dc1

 

3. Start Consul in server mode

So far Consul has always been started in -dev development mode. When a dev-mode Consul serves as a configuration center, the configuration data cannot be saved: nothing is persisted. To get persistence, start Consul in server mode.

    

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:54:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:54:02 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:54:02 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:54:02 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:54:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:54:02 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:54:02 [INFO] agent: started state syncer
    2018/07/28 10:54:08 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 10:54:08 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:54:08 [INFO] consul: cluster leadership acquired
    2018/07/28 10:54:08 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:54:08 [INFO] consul: member 'agent-one' joined, marking health alive
    2018/07/28 10:54:08 [INFO] agent: Synced node info
    2018/07/28 10:54:11 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:12 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:14 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:16 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:18 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:20 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:20 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:23 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:24 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:26 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:28 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:30 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:32 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:32 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:35 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:36 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:38 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:40 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:42 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:44 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:44 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:47 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:48 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:50 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:52 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:54 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:56 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:56 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:59 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:55:00 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:55:02 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:55:04 [WARN] consul: error getting server health from "agent-one": last request still outstanding
^C    2018/07/28 10:55:10 [INFO] agent: Caught signal:  interrupt
    2018/07/28 10:55:10 [INFO] agent: Graceful shutdown disabled. Exiting
    2018/07/28 10:55:10 [INFO] agent: Requesting shutdown
    2018/07/28 10:55:10 [INFO] consul: shutting down server
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [INFO] manager: shutting down
    2018/07/28 10:55:10 [INFO] agent: consul server down
    2018/07/28 10:55:10 [INFO] agent: shutdown complete
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:10 [INFO] agent: Stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [WARN] agent: Timeout stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [INFO] agent: Waiting for endpoints to shut down
    2018/07/28 10:55:11 [INFO] agent: Endpoints down
    2018/07/28 10:55:11 [INFO] agent: Exit code: 1
[root@iZbp1dmlbagds9s70r8luxZ local]# 
[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:55:15 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:15 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:55:15 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:55:15 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:15 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:55:15 [INFO] agent: started state syncer
    2018/07/28 10:55:21 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 3
    2018/07/28 10:55:21 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:55:21 [INFO] consul: cluster leadership acquired
    2018/07/28 10:55:21 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:55:21 [INFO] agent: Synced node info

Startup command:

consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0

Parameter notes:

-server: run in server mode
-ui: enable the built-in web UI
-bootstrap-expect: set to 1 so that a single server immediately elects itself cluster leader
-data-dir: where Consul persists its state
-node: the node name
-advertise: the address advertised to the rest of the cluster (here the host's IP)
-client: the bind address for the client interfaces, i.e. which IPs may reach this node

 

The output above is from starting in server mode. Note the repeated "error getting server health" warnings in the first run: the agent could not dial its own advertise address on port 8300 (i/o timeout). Port 8300 must be opened as well, since Consul uses it for server-to-server communication; once it is reachable, the second run comes up cleanly.
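If firewalld is in use, the cluster ports can be opened the same way as 8500 earlier; 8300 carries server RPC, and 8301/8302 carry LAN/WAN gossip over both TCP and UDP:

firewall-cmd --permanent --add-port=8300/tcp
firewall-cmd --permanent --add-port=8301/tcp
firewall-cmd --permanent --add-port=8301/udp
firewall-cmd --permanent --add-port=8302/tcp
firewall-cmd --permanent --add-port=8302/udp
firewall-cmd --reload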

Check the member list again:

[root@iZbp1dmlbagds9s70r8luxZ data]# consul members
Node       Address            Status  Type    Build  Protocol  DC   Segment
agent-one  47.98.112.71:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ data]# 
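With -data-dir set, data written through the KV store (which is what the configuration center uses) now survives a restart. A quick sanity check with the consul kv commands; the config/demo/greeting key is just an example:

consul kv put config/demo/greeting hello
# ...restart the agent...
consul kv get config/demo/greeting    # still prints: hello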

 

4. Join a cluster

Command to join a cluster: consul join xx.xx.xx.xx
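For example, with the server above already running, a second agent on another host would point the join at the first server's address and then confirm membership:

# Run on the new node, giving an existing member's address
consul join 47.98.112.71

# Then verify on either node
consul members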

 

5. Building a Consul cluster

Here we use Docker containers to build a Consul cluster, described with a Docker Compose file.

Cluster layout:
1. Three server nodes (consul-server1 ~ 3) and two client nodes (consul-node1 ~ 2)
2. The local consul/data1 ~ 3/ directories are mapped into the Docker containers, so cluster data is not lost after a restart
3. The Consul web/HTTP ports are 8501, 8502, and 8503 respectively

Create docker-compose.yml:

version: '2.0'
services:
  # First server: bootstraps the cluster and expects 3 servers in total
  consul-server1:
    image: consul:latest
    hostname: "consul-server1"
    ports:
      - "8501:8500"                     # host 8501 -> container HTTP 8500
    volumes:
      - ./consul/data1:/consul/data     # persist state across restarts
    command: "agent -server -bootstrap-expect 3 -ui -disable-host-node-id -client 0.0.0.0"
  # Second and third servers join the cluster via consul-server1
  consul-server2:
    image: consul:latest
    hostname: "consul-server2"
    ports:
      - "8502:8500"
    volumes:
      - ./consul/data2:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  consul-server3:
    image: consul:latest
    hostname: "consul-server3"
    ports:
      - "8503:8500"
    volumes:
      - ./consul/data3:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  # Client agents: no -server flag, no published ports
  consul-node1:
    image: consul:latest
    hostname: "consul-node1"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
  consul-node2:
    image: consul:latest
    hostname: "consul-node2"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1

When the cluster starts, consul-server1 acts as the initial leader by default, and server2 ~ 3 and node1 ~ 2 join that cluster. If server1 fails and goes offline, server2 ~ 3 hold an election and choose a new leader.

Cluster operations:
Create and start the cluster: docker-compose up -d
Stop the whole cluster: docker-compose stop
Start it again: docker-compose start
Remove the whole cluster: docker-compose rm (note: stop it first)
Access the UIs at:
http://localhost:8501
http://localhost:8502
http://localhost:8503
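After docker-compose up -d, you can confirm that all five agents joined by running consul members inside any of the containers:

docker-compose exec consul-server1 consul members
# Expect consul-server1..3 with Type=server and consul-node1..2 with Type=client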

 

With the Consul cluster in place, how do we load-balance across it? Use nginx:

Define the upstream server list:

upstream consul {
    server 127.0.0.1:8501;
    server 127.0.0.1:8502;
    server 127.0.0.1:8503;
}

Server configuration:

server {
    listen       80;
    server_name  consul.test.com;  # service domain; replace with your own domain

    location / {
        proxy_pass  http://consul;  # forward requests to the consul upstream list
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
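A quick way to test the proxy without DNS set up, assuming nginx runs locally, is to send the Host header by hand:

curl -H "Host: consul.test.com" http://127.0.0.1/v1/status/leader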

Reference: building a Consul cluster with Docker

 

Summary:

This more or less wraps up the basics of Consul. It is only basic usage, though; going further means secondary development against the Consul client, for example customizing service registration and discovery, and handling the fact that Consul as a configuration center can lose configuration data: how to automatically back up and restore the configuration. I hope to explore these extensions in depth in a later post.

Related posts:

Consul 1: Installing Consul on Windows

Consul 2: Using Consul as a service registration and discovery center

Consul 3: Using Consul as a configuration center

I previously wrote a controller of my own to serve as the service health check; that is actually unnecessary, since Spring Boot already solves this. For reference:

Use Spring Boot's built-in health check and adjust the Consul health check accordingly


 
