Note: the backup-binlog directories must already have been created when the containers start.

0. Information about the two clusters

Cluster 1:
Master node      --> Slave node
127.0.0.1:8001   --> 127.0.0.1:8004
127.0.0.1:8002   --> 127.0.0.1:8005
127.0.0.1:8003   --> 127.0.0.1:8006
Master backup directory          --> Slave backup directory:
/tendisplus1/store/backup-binlog --> /tendisplus4/store/backup-binlog
/tendisplus2/store/backup-binlog --> /tendisplus5/store/backup-binlog
/tendisplus3/store/backup-binlog --> /tendisplus6/store/backup-binlog

Cluster 2 (do not create the slave nodes yet; only the masters are needed for now):
Master node      --> Slave node
127.0.0.2:8001   --> 127.0.0.2:8004
127.0.0.2:8002   --> 127.0.0.2:8005
127.0.0.2:8003   --> 127.0.0.2:8006
Master backup directory          --> Slave backup directory:
/tendisplus1/store/backup-binlog --> /tendisplus4/store/backup-binlog
/tendisplus2/store/backup-binlog --> /tendisplus5/store/backup-binlog
/tendisplus3/store/backup-binlog --> /tendisplus6/store/backup-binlog

1. Run backup $dir on each master node of cluster 1

Record the timestamp at which the backup is taken (timestamp1), or alternatively the binlogid; if you go with binlogids, you must record the binlogid of each of stores 0-9 on every node (run binlogpos $storeid; see the sketch after the backup commands below).

redis-cli -a 123456 -h 127.0.0.1 -p 8001 backup /tendisplus1/store/backup-binlog

redis-cli -a 123456 -h 127.0.0.1 -p 8002 backup /tendisplus2/store/backup-binlog

redis-cli -a 123456 -h 127.0.0.1 -p 8003 backup /tendisplus3/store/backup-binlog
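A minimal sketch (not part of the original steps) for recording the binlogid of stores 0-9 on every cluster-1 master, using the binlogpos command mentioned above; ports and password are the ones used in this document:

for port in 8001 8002 8003; do
  for store in {0..9}; do
    # print "host:port store N" followed by that store's current binlogid
    echo -n "127.0.0.1:$port store $store binlogpos: "
    redis-cli -a 123456 -h 127.0.0.1 -p $port binlogpos $store
  done
done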

2. Copy cluster 1's backup-binlog directories --> cluster 2's backup-binlog directories

The test was done on a single machine; in production the ops team has to copy the directories across machines (one possible approach is sketched after the cp lines below).

cp 127.0.0.1:/tendisplus1/store/backup-binlog/ --> 127.0.0.2:/tendisplus1/store/backup-binlog

cp 127.0.0.1:/tendisplus2/store/backup-binlog/ --> 127.0.0.2:/tendisplus2/store/backup-binlog

cp 127.0.0.1:/tendisplus3/store/backup-binlog/ --> 127.0.0.2:/tendisplus3/store/backup-binlog
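For the cross-machine case, one possible approach (rsync over ssh and the ops_user account are assumptions, not part of the original) is:

# ops_user is a placeholder account with write access on 127.0.0.2
for i in 1 2 3; do
  rsync -a /tendisplus$i/store/backup-binlog/ ops_user@127.0.0.2:/tendisplus$i/store/backup-binlog/
done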

3. Run restorebackup all $dir force on cluster 2's master nodes

redis-cli -a 123456 -h 127.0.0.2 -p 8001 restorebackup all /tendisplus1/store/backup-binlog force

redis-cli -a 123456 -h 127.0.0.2 -p 8002 restorebackup all /tendisplus2/store/backup-binlog force

redis-cli -a 123456 -h 127.0.0.2 -p 8003 restorebackup all /tendisplus3/store/backup-binlog force
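A quick sanity check after the restore, assuming the redis-compatible dbsize command is available; it compares key counts between the cluster-1 masters and the freshly restored cluster-2 masters:

for port in 8001 8002 8003; do
  echo "cluster 1 127.0.0.1:$port keys: $(redis-cli -a 123456 -h 127.0.0.1 -p $port dbsize)"
  echo "cluster 2 127.0.0.2:$port keys: $(redis-cli -a 123456 -h 127.0.0.2 -p $port dbsize)"
done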

4. After the restore succeeds, set up the master-slave relationships in cluster 2 (the node ids can be looked up as sketched after the commands below)

redis-cli -a 123456 -h 127.0.0.2 -p 8004 cluster replicate <nodeid of 127.0.0.2:8001>

redis-cli -a 123456 -h 127.0.0.2 -p 8005 cluster replicate <nodeid of 127.0.0.2:8002>

redis-cli -a 123456 -h 127.0.0.2 -p 8006 cluster replicate <nodeid of 127.0.0.2:8003>
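The node ids can be looked up before running cluster replicate; a hedged sketch that parses the standard cluster nodes output (the id is assumed to be the first field of the "myself" line, as in regular redis cluster):

for port in 8001 8002 8003; do
  nodeid=$(redis-cli -a 123456 -h 127.0.0.2 -p $port cluster nodes | grep myself | awk '{print $1}')
  echo "127.0.0.2:$port nodeid: $nodeid"
done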

5. To sync the data from the following time window (timestamp1 --> timestamp2), use the binlog_tool utility; only the binlog files on cluster 1's slave nodes can be replayed into cluster 2's master nodes. In the paths below, 0-9 stands for the per-store dump subdirectories (a per-store loop is sketched after the commands).

/tendisplus4/bin/binlog_tool --logfile=/tendisplus4/dump/0-9/*.binlog --mode=base64 --start-datetime=timestamp1 --end-datetime=timestamp2| redis-cli -a 123456 -h 127.0.0.2 -p 8001

/tendisplus5/bin/binlog_tool --logfile=/tendisplus5/dump/0-9/*.binlog --mode=base64 --start-datetime=timestamp1 --end-datetime=timestamp2| redis-cli -a 123456 -h 127.0.0.2 -p 8002

/tendisplus6/bin/binlog_tool --logfile=/tendisplus6/dump/0-9/*.binlog --mode=base64 --start-datetime=timestamp1 --end-datetime=timestamp2| redis-cli -a 123456 -h 127.0.0.2 -p 8003
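A minimal per-store loop for one slave/master pair, using the same flags as above and assuming the dump layout described earlier; the other two pairs work the same way:

for store in {0..9}; do
  for f in /tendisplus4/dump/$store/*.binlog; do
    /tendisplus4/bin/binlog_tool --logfile=$f --mode=base64 --start-datetime=timestamp1 --end-datetime=timestamp2 | redis-cli -a 123456 -h 127.0.0.2 -p 8001
  done
done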

6. In production only 7 binlog files are kept, so the transfer has to be done with the binlog_transfer.sh script below.

#!/bin/bash
# binlog_transfer.sh: replay the binlogs of every store (0-9) into a target master.
# Usage: ./binlog_transfer.sh <start-timestamp> <end-timestamp> <password> <host> <port>
# The hard-coded paths below may need to be adjusted to the slave instance being read (e.g. /tendisplus4).
dir=/tendisplus/store/dump/
for i in {0..9}
do
  # list store $i's binlog files oldest-first so they are replayed in order
  fileStr=$(ls $dir$i -rt)
  files=($fileStr)
  for var in ${files[@]}
  do
    echo $dir$i/$var
    /tendisplus/bin/binlog_tool --logfile=$dir$i/$var --mode=base64 --start-timestamp=$1 --end-timestamp=$2 | redis-cli -a $3 -h $4 -p $5
  done
done

Usage: ./binlog_transfer.sh timestamp1 timestamp2 123456 127.0.0.2 8001
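The script hard-codes dir and the binlog_tool path for a single instance, so it has to be adjusted (or copied) per slave; a hedged sketch of driving all three slave/master pairs:

./binlog_transfer.sh timestamp1 timestamp2 123456 127.0.0.2 8001   # binlogs read from /tendisplus4
./binlog_transfer.sh timestamp1 timestamp2 123456 127.0.0.2 8002   # binlogs read from /tendisplus5
./binlog_transfer.sh timestamp1 timestamp2 123456 127.0.0.2 8003   # binlogs read from /tendisplus6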
