Redis 5.0 Cluster in a Docker Environment
I. Verify the Redis version
Pull a Redis image, start a container, and run the following command inside it:
root@baccf09b18be:/data# redis-server -v
Redis server v=5.0.0 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=9a5fa86bdce33ad2
root@baccf09b18be:/data#
Make sure the Redis server is version 5.0.
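The version check above can also be scripted. A minimal sketch that parses the version number out of the `redis-server -v` banner (the banner is hardcoded here from the transcript above for illustration; in practice assign `banner=$(redis-server -v)`):

```shell
#!/bin/sh
# Parse the major.minor.patch version out of the `redis-server -v` banner
# using only POSIX parameter expansion, and assert it is a 5.x release.
banner='Redis server v=5.0.0 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=9a5fa86bdce33ad2'
version=${banner#*v=}      # strip everything up to and including the first "v="
version=${version%% *}     # strip everything after the first space
echo "$version"            # → 5.0.0
case $version in
  5.*) echo "Redis 5.x confirmed" ;;
  *)   echo "unexpected version: $version" >&2; exit 1 ;;
esac
```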
II. Prepare the configuration files
1. Create a file named redis-cluster.tmpl with the following contents:
[root@VM_0_9_centos ~]# cat /home/redis-cluster/redis-cluster.tmpl
port ${PORT}
protected-mode no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 123.207.67.174
cluster-announce-port ${PORT}
cluster-announce-bus-port 1${PORT}
appendonly yes
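One detail worth calling out: `cluster-announce-bus-port 1${PORT}` is plain string concatenation performed by envsubst, not arithmetic. For the four-digit ports used here it happens to reproduce the Redis default cluster bus port convention (data port + 10000):

```shell
#!/bin/sh
# Show that prefixing "1" to a four-digit port equals port + 10000,
# i.e. the Redis default cluster bus port, for the 8000-8005 range.
port=8000
bus_port="1${port}"            # string concatenation: "1" + "8000"
echo "$bus_port"               # → 18000
[ "$bus_port" -eq $((port + 10000)) ] && echo "matches port+10000"
```

Note that this shortcut only works for four-digit ports; for, say, port 700 it would announce 1700 rather than 10700.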
2. In the same directory, run the following loop. It renders the template once per port with envsubst and creates the conf and data directories for each node:
for port in `seq 8000 8005`; do mkdir -p ./${port}/conf && PORT=${port} envsubst < ./redis-cluster.tmpl > ./${port}/conf/redis.conf && mkdir -p ./${port}/data; done
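envsubst ships with gettext and is not installed on every system. As a portable fallback (and a way to sanity-check what the loop produces), the same six redis.conf files can be rendered with a plain here-document; this sketch writes under a temporary directory so it does not disturb /home/redis-cluster:

```shell
#!/bin/sh
# Portable alternative to the envsubst loop: render the template with a
# shell function and a here-document, one redis.conf per port.
workdir=$(mktemp -d)

render() {  # $1 = port; prints a redis.conf with the port substituted
  cat <<EOF
port $1
protected-mode no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 123.207.67.174
cluster-announce-port $1
cluster-announce-bus-port 1$1
appendonly yes
EOF
}

for port in $(seq 8000 8005); do
  mkdir -p "$workdir/$port/conf" "$workdir/$port/data"
  render "$port" > "$workdir/$port/conf/redis.conf"
done

head -n 1 "$workdir/8005/conf/redis.conf"   # → port 8005
```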
III. Deploy with Docker
1. Create a custom Docker network:
docker network create redis-net
2. Start the six Redis cluster containers, one per port:
for port in `seq 8000 8005`; do docker run -d -ti -p ${port}:${port} -p 1${port}:1${port} -v /home/redis-cluster/${port}/conf/redis.conf:/usr/local/etc/redis/redis.conf -v /home/redis-cluster/${port}/data:/data --restart always --name redis-${port} --net redis-net --sysctl net.core.somaxconn=1024 redis redis-server /usr/local/etc/redis/redis.conf; done
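Before executing the loop it can help to see what each iteration expands to. A dry-run sketch that builds and prints each docker run command instead of executing it (drop the final `echo` to actually deploy):

```shell
#!/bin/sh
# Dry run: print the docker run command for each port instead of executing
# it, to verify the port mappings and volume paths before deploying.
for port in $(seq 8000 8005); do
  cmd="docker run -d -ti -p ${port}:${port} -p 1${port}:1${port} \
-v /home/redis-cluster/${port}/conf/redis.conf:/usr/local/etc/redis/redis.conf \
-v /home/redis-cluster/${port}/data:/data \
--restart always --name redis-${port} --net redis-net \
--sysctl net.core.somaxconn=1024 \
redis redis-server /usr/local/etc/redis/redis.conf"
  echo "$cmd"
done
```

Each node publishes both its data port (e.g. 8000) and its cluster bus port (e.g. 18000) to the host, which is what makes the cluster-announce-* settings in the template work for external clients.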
IV. Determine the containers' IP addresses
1. Run:
[root@VM_0_9_centos ~]# for port in `seq 8000 8005`; do echo -n "$(docker inspect --format '{{ (index .NetworkSettings.Networks "redis-net").IPAddress }}' "redis-${port}")":${port}" " ; done
172.18.0.2:8000 172.18.0.3:8001 172.18.0.4:8002 172.18.0.5:8003 172.18.0.6:8004 172.18.0.7:8005 [root@VM_0_9_centos ~]#
V. Log in to one of the containers and create the cluster
1. Log in to a container:
docker exec -it redis-8000 bash
2. Inside the container, run redis-cli --cluster create with the node addresses from step IV. (Note: the transcript below was captured on a run where the containers received 172.19.0.x addresses; substitute whatever IPs your own docker inspect reported, e.g. the 172.18.0.x addresses shown above.)
root@01184b2ab5d4:~#
root@01184b2ab5d4:~# redis-cli --cluster create 172.19.0.2:8000 172.19.0.3:8001 172.19.0.4:8002 172.19.0.5:8003 172.19.0.6:8004 172.19.0.7:8005 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.0.5:8003 to 172.19.0.2:8000
Adding replica 172.19.0.6:8004 to 172.19.0.3:8001
Adding replica 172.19.0.7:8005 to 172.19.0.4:8002
M: 314d0f9a5a41f5652fbd190cf161ab731163a98c 172.19.0.2:8000
slots:[0-5460] (5461 slots) master
M: 464dce6fa83ddeacd75350f471efce99ae48bd39 172.19.0.3:8001
slots:[5461-10922] (5462 slots) master
M: c1a93262707dabde8e470726bf5ac8c57e0aa73f 172.19.0.4:8002
slots:[10923-16383] (5461 slots) master
S: f278a58a2abb6abfc897923700654e64c8c25721 172.19.0.5:8003
replicates 314d0f9a5a41f5652fbd190cf161ab731163a98c
S: cbef537b2d66b9e57c230ba63c16747d70a8f522 172.19.0.6:8004
replicates 464dce6fa83ddeacd75350f471efce99ae48bd39
S: 6fae5c2c1594b408a5a1c546c2ba87f6ab219207 172.19.0.7:8005
replicates c1a93262707dabde8e470726bf5ac8c57e0aa73f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.19.0.2:8000)
M: 314d0f9a5a41f5652fbd190cf161ab731163a98c 172.19.0.2:8000
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 6fae5c2c1594b408a5a1c546c2ba87f6ab219207 123.207.67.174:8005
slots: (0 slots) slave
replicates c1a93262707dabde8e470726bf5ac8c57e0aa73f
S: cbef537b2d66b9e57c230ba63c16747d70a8f522 123.207.67.174:8004
slots: (0 slots) slave
replicates 464dce6fa83ddeacd75350f471efce99ae48bd39
M: 464dce6fa83ddeacd75350f471efce99ae48bd39 123.207.67.174:8001
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: f278a58a2abb6abfc897923700654e64c8c25721 123.207.67.174:8003
slots: (0 slots) slave
replicates 314d0f9a5a41f5652fbd190cf161ab731163a98c
M: c1a93262707dabde8e470726bf5ac8c57e0aa73f 123.207.67.174:8002
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@01184b2ab5d4:~#