1. Experiment environment

1) Three virtual machines (Ubuntu 14.04)

2) Docker version: 1.12.1

3) The IPs of the three VMs are 192.168.110.132 (manager), 192.168.110.136 (worker1), and 192.168.110.139 (worker2)

4) On each of the three VMs, add the following to the /etc/default/docker file:

192.168.110.132:

DOCKER_OPTS="--label com.example.manager=manager -H tcp://0.0.0.0:2375 -H tcp://0.0.0.0:5678 -H unix:///var/run/docker.sock"

192.168.110.136:

DOCKER_OPTS="--label com.example.worker=worker1 -H tcp://0.0.0.0:2375 -H tcp://0.0.0.0:5678 -H unix:///var/run/docker.sock"

192.168.110.139:

DOCKER_OPTS="--label com.example.worker=worker2 -H tcp://0.0.0.0:2375 -H tcp://0.0.0.0:5678 -H unix:///var/run/docker.sock"

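The /etc/default/docker change only takes effect after the Docker daemon restarts. A minimal sketch of applying and checking it on each VM (assuming Ubuntu 14.04's upstart-managed Docker service; the grep patterns are just illustrative):

# Restart the daemon so DOCKER_OPTS is picked up
sudo service docker restart
# Confirm the engine label was applied
docker info | grep -A1 Labels
# Confirm the daemon is listening on the two TCP ports
sudo netstat -ltnp | grep -E '2375|5678'
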
2. Operations on the manager

1) Create an overlay network named mywork:

docker network create --driver overlay mywork
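
Note that with Docker 1.12 in swarm mode, an overlay network can only be created on a swarm manager, so if this command is rejected because the node is not yet a manager, run the swarm init from step 3) first. A quick sanity check afterwards:

# mywork should be listed with the overlay driver
docker network ls --filter driver=overlay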

2) Create three data volumes named myvolume, logstash_conf, and logstash_data:

docker volume create --name myvolume
docker volume create --name logstash_conf
docker volume create --name logstash_data
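
The host paths used later in step 5 (/var/lib/docker/volumes/<name>/_data) are simply these volumes' mount points, which can be confirmed with:

# Print where Docker stores each named volume on the host
docker volume inspect --format '{{ .Name }} -> {{ .Mountpoint }}' logstash_conf logstash_data
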
3) Initialize the swarm:

docker swarm init --advertise-addr 192.168.110.132
On success, output like the following is printed:

Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    192.168.110.132:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
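
If the worker join token is misplaced, it can be printed again at any time on the manager:

# Re-print the full worker join command
docker swarm join-token worker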

3. Operations on worker1

1) Join the swarm:

docker swarm join --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2 192.168.110.132:2377

4. Operations on worker2

Join the swarm with the same worker token:

docker swarm join --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2 192.168.110.132:2377
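
Back on the manager, cluster membership can now be verified; all three nodes should show a Ready status:

# Run on the manager: list the swarm nodes and their status
docker node ls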

5. Continue on the manager

1) Create elasticsearch and logstash directories under /home.

2) In the elasticsearch directory, create an elasticsearch.sh file with the following content:

#!/bin/bash
docker service create \
  --replicas 3 \
  --name elasticsearch \
  --update-delay 10s \
  --update-parallelism 2 \
  --update-failure-action continue \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --network mywork \
  --env ES_CLUSTERNAME=elasticsearch \
  --mount type=volume,src=myvolume,dst=/usr/share/elasticsearch/data \
  elasticsearch \
  elasticsearch
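
The --update-* flags only come into play when the service is updated later: with the settings above, a rolling update replaces two tasks at a time, waits 10s between batches, and continues even if a batch fails. A sketch of such an update (the 2.4 image tag is an assumed example, not part of the original setup):

# Roll the running service to a different image, two replicas at a time
docker service update --image elasticsearch:2.4 elasticsearch
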
3) In the logstash directory, create a logstash.sh file with the following content:

#!/bin/bash
docker service create \
  --replicas 3 \
  --name logstash \
  --update-delay 10s \
  --update-parallelism 2 \
  --publish 25826:25826 \
  --publish 25826:25826/udp \
  --network mywork \
  --mount type=volume,src=logstash_conf,dst=/conf/ \
  --mount type=volume,src=logstash_data,dst=/var/data \
  logstash:latest \
  logstash -f /conf/central.conf
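
The trailing logstash -f /conf/central.conf overrides the image's default command so that every replica loads its pipeline from the logstash_conf volume. Once the script has run, task placement can be checked with:

# Show which nodes the three logstash replicas were scheduled on
docker service ps logstash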
4) Create a central.conf file under /var/lib/docker/volumes/logstash_conf/_data with the following content:

input {
  file {
    path => "/var/data/test.txt"
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
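
One caveat: by default the Logstash file input tails a file from the end, so only lines appended after the pipeline starts are indexed. If pre-existing content in test.txt should also be picked up, the input can optionally be adjusted like this (a tweak that is not part of the original setup):

input {
  file {
    path => "/var/data/test.txt"
    # read from the start of the file instead of tailing the end
    start_position => "beginning"
  }
}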
5) In the elasticsearch and logstash directories respectively, run sh elasticsearch.sh and sh logstash.sh.

Then verify the result:

root@dockertest1:/# docker service ls
ID            NAME           REPLICAS  IMAGE            COMMAND
0lickc8n6tgs  elasticsearch  3/3       elasticsearch    elasticsearch
cuk6lies8ven  logstash       3/3       logstash:latest  logstash -f /conf/central.conf

6) Create a test.txt file under /var/lib/docker/volumes/logstash_data/_data and add some arbitrary content to it, e.g. 1,2,3,4, as shown below.
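
A minimal way to do this from the shell (the sample line is arbitrary):

# Append a test line; inside the container logstash tails /var/data/test.txt,
# which maps to this volume path on the host
echo "1,2,3,4" >> /var/lib/docker/volumes/logstash_data/_data/test.txt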

7) On each node, verify that Elasticsearch has received the data:

root@dockertest2:/# curl 'localhost:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size 
yellow open   logstash-2016.09.07   5   1          3            0     10.2kb         10.2kb 
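
The indexed documents themselves can also be inspected; the index name follows the logstash-YYYY.MM.DD pattern shown above:

# Fetch the documents logstash wrote into the daily index
curl 'localhost:9200/logstash-2016.09.07/_search?pretty'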

6. End of experiment; completed successfully.


