NFS High Availability Deployment
Deploy an NFS dual-node hot-standby high-availability environment to serve as remote storage for a K8S container cluster and provide K8S data persistence.
Purpose of NFS High Availability
Deploy an NFS dual-node hot-standby high-availability environment to serve as remote storage for a K8S container cluster and provide K8S data persistence.
NFS High Availability Approach
NFS + Keepalived for high availability, eliminating the single point of failure.
Rsync + Inotify for real-time synchronization of the shared data between master and slave.
Technical Requirements
- The two NFS node machines should have identical configurations.
- keepalived monitors the nfs process; if the master's nfs process dies and cannot be restarted, the slave's nfs takes over and keeps serving.
- K8S data is backed up to the slave, and master and slave keep their data in real-time sync with rsync + inotify to guarantee data integrity.
- In production, it is best to mount a dedicated disk or a separate partition for the NFS shared directory.
Environment Preparation
### Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
### Disable SELinux
setenforce 0               ## disable temporarily
vi /etc/selinux/config     ## disable permanently
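For the permanent change, set SELINUX=disabled in /etc/selinux/config; a one-line sed does it:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config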
NFS High Availability Deployment
I. Install and Configure the NFS Service (same steps on both Master and Slave)
1) Install NFS
yum -y install nfs-utils
2) Create the NFS shared directory
mkdir -p /data/k8s_storage
3) Edit the exports file to allow the K8S node machines to mount the NFS shared directory
vim /etc/exports
/data/k8s_storage 10.90.12.0/24(rw,sync,no_root_squash)
4) Apply the configuration
exportfs -r
5) Verify the exports
exportfs
6) Enable and start the rpcbind and nfs services
systemctl enable rpcbind --now
systemctl enable nfs --now
7) Check the RPC service registration
rpcinfo -p localhost
8) Test with showmount
Test against the Master node
showmount -e masterIP
Test against the Slave node
showmount -e SlaveIP
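Optionally, verify from one of the K8S nodes that the export is actually mountable (assuming nfs-utils is installed on that node; replace masterIP with the real address):
# mount -t nfs masterIP:/data/k8s_storage /mnt
# df -h /mnt
# umount /mnt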
II. Install and Configure keepalived (same steps on both Master and Slave)
1) Install keepalived
yum -y install keepalived
2) keepalived.conf on the Master node (keepalived is set to non-preemptive mode; in preemptive mode, repeated master/backup switching can easily cause NFS data loss)
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
# >/etc/keepalived/keepalived.conf
# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id nfs
}

vrrp_script chk_nfs {
    script "/etc/keepalived/nfs_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 61
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nfs
    }
    virtual_ipaddress {
        10.90.12.30/24
    }
}
3) keepalived.conf on the Slave node (identical to the Master's; with nopreempt both nodes start as BACKUP, and whichever node comes up first holds the VIP until it fails)
global_defs {
    router_id nfs
}

vrrp_script chk_nfs {
    script "/etc/keepalived/nfs_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 61
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nfs
    }
    virtual_ipaddress {
        10.90.12.30/24
    }
}
4) Write the nfs_check.sh monitoring script (if nfsd disappears, the script tries to restart NFS; if NFS still cannot start, it stops keepalived so the VIP fails over to the other node)
# vi /etc/keepalived/nfs_check.sh
#!/bin/bash
for i in `seq 14`; do
    counter=`ps -aux | grep '\[nfsd\]' | wc -l`
    KEEP=`ps -ef | grep keepalived | grep -v grep | wc -l`
    if [ $counter -eq 0 ]; then
        # nfsd is gone, try to bring the NFS service back
        systemctl restart nfs
    fi
    sleep 2
    counter=`ps -aux | grep '\[nfsd\]' | wc -l`
    if [ $counter -eq 0 ]; then
        # NFS could not be restarted: stop keepalived so the VIP fails over
        systemctl stop keepalived.service
    else
        if [ $KEEP -eq 0 ]; then
            systemctl start keepalived
        fi
    fi
    sleep 2
done
Set execute permission on the script
# chmod 755 /etc/keepalived/nfs_check.sh
5) Enable and start the keepalived service
# systemctl enable keepalived.service --now
Check that the keepalived process is running
# ps -ef|grep keepalived
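You can also check which node currently holds the VIP; it should appear as a secondary address on ens192 of exactly one of the two machines:
# ip addr show ens192 | grep 10.90.12.30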
III. Install and Configure Rsync + Inotify (perform on both Master and Slave)
1) Install rsync and inotify-tools
# yum -y install rsync inotify-tools
2) Configure rsyncd.conf on the Master node
# cp /etc/rsyncd.conf /etc/rsyncd.conf_bak
# >/etc/rsyncd.conf
# vim /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
port = 873
hosts allow = 10.90.12.0/24
max connections = 0
timeout = 300
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsyncd.lock
log file = /var/log/rsyncd.log
log format = %t %a %m %f %b
transfer logging = yes
syslog facility = local3
[master_nfs]
path = /data/k8s_storage
comment = master_nfs
ignore errors = yes
read only = no
list = no
auth users = nfs
secrets file = /opt/rsync_salve.pass
Edit the auth user/password file (format: "username:password")
# vim /opt/rsync_salve.pass
nfs:nfs123
Edit the sync password file
This file only needs to hold the slave server's password. For example, since the slave is configured with username/password nfs:nfs123, the master's sync password file just contains the single line nfs123:
# vi /opt/rsync.pass
nfs123
Set restrictive permissions on both password files
# chmod 600 /opt/rsync_salve.pass
# chmod 600 /opt/rsync.pass
Enable and start the rsyncd service
# systemctl enable rsyncd --now
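Confirm that the rsync daemon is listening on port 873 (the port configured in rsyncd.conf):
# ss -lntp | grep 873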
3) Configure rsyncd.conf on the Slave node
Copy the Master's /etc/rsyncd.conf and change the module name [master_nfs] to [slave_nfs].
Everything else stays the same, including the password files.
4) Manually verify that NFS data on the Master syncs to the Slave
Create some test data in the Master node's NFS shared directory
# mkdir /data/k8s_storage/test
# touch /data/k8s_storage/{a,b}
Manually sync the Master node's NFS shared directory to the Slave node's NFS shared directory
# rsync -avzp --delete /data/k8s_storage/ nfs@slaveIP::slave_nfs --password-file=/opt/rsync.pass
Check on the Slave node that the data has been synced
# ls /data/k8s_storage/
Notes on the rsync command above:
- /data/k8s_storage/ is the NFS shared directory being synced
- nfs@slaveIP::slave_nfs
  - nfs is the username configured in the Slave node's /opt/rsync_salve.pass file
  - slaveIP is the Slave node's IP address
  - slave_nfs is the sync module name configured in the Slave node's rsyncd.conf
- --password-file=/opt/rsync.pass is the password file the Master uses when syncing to the Slave; it holds the password of the nfs auth user configured on the Slave node
IV. Configure Rsync + Inotify Automatic Synchronization
1. Configure Inotify automatic synchronization
1) Write the auto-sync script
# mkdir -p /usr/local/nfs_rsync
# vi /usr/local/nfs_rsync/rsync_inotify.sh
#!/bin/bash
host=slaveIP                    # change to the Slave's real IP
src=/data/k8s_storage/
des=slave_nfs
password=/opt/rsync.pass
user=nfs
inotifywait=/usr/bin/inotifywait

# Watch the shared directory and push every change to the Slave's rsync module
$inotifywait -mrq --timefmt '%Y%m%d %H:%M' --format '%T %w%f%e' -e modify,delete,create,attrib $src \
| while read files; do
    rsync -avzP --delete --timeout=100 --password-file=${password} $src $user@$host::$des
    echo "${files} was rsynced" >>/tmp/rsync.log 2>&1
done
2) Manage the auto-sync script with systemd
- Write the start script
# vi /usr/local/nfs_rsync/rsync_inotify_start.sh
#!/bin/bash
nohup sh /usr/local/nfs_rsync/rsync_inotify.sh >> /var/log/rsync_inotify.log 2>&1 &
- Write the stop script
# vi /usr/local/nfs_rsync/rsync_inotify_stop.sh
#!/bin/bash
for i in `ps -ef | grep rsync_inotify.sh | grep -v grep | awk '{print $2}'`
do
    kill -9 $i
done
- Write the systemd unit file
# vi /usr/lib/systemd/system/nfs_rsync.service
[Unit]
Description=rsync_inotify service
[Service]
Type=forking
TimeoutStartSec=10
WorkingDirectory=/usr/local/nfs_rsync
User=root
Group=root
Restart=on-failure
RestartSec=15s
ExecStart=/usr/local/nfs_rsync/rsync_inotify_start.sh
ExecStop=/usr/local/nfs_rsync/rsync_inotify_stop.sh
[Install]
WantedBy=multi-user.target
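Since ExecStart and ExecStop point at shell scripts, make them executable so systemd can run them:
# chmod +x /usr/local/nfs_rsync/rsync_inotify_start.sh /usr/local/nfs_rsync/rsync_inotify_stop.sh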
3) Auto-sync script on the Slave node (everything else is configured the same as above)
# vi /usr/local/nfs_rsync/rsync_inotify.sh
#!/bin/bash
host=masterIP                   # change to the Master's real IP
src=/data/k8s_storage/
des=master_nfs
password=/opt/rsync.pass
user=nfs
inotifywait=/usr/bin/inotifywait

# Watch the shared directory and push every change to the Master's rsync module
$inotifywait -mrq --timefmt '%Y%m%d %H:%M' --format '%T %w%f%e' -e modify,delete,create,attrib $src \
| while read files; do
    rsync -avzP --delete --timeout=100 --password-file=${password} $src $user@$host::$des
    echo "${files} was rsynced" >>/tmp/rsync.log 2>&1
done
4) Start the inotify sync service
systemctl enable nfs_rsync.service --now
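A quick way to confirm the automatic sync works: create a file on the Master and check that it shows up on the Slave within a second or two.
# touch /data/k8s_storage/inotify_test     ## on the Master
# ls /data/k8s_storage/                    ## on the Slave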
2. Configure keepalived VIP monitoring (same configuration on both master and slave)
1) Write the VIP monitoring script (it stops the nfs_rsync sync service on whichever node does not hold the VIP, so only the active node pushes data)
# mkdir -p /usr/local/vip_monitor
# vi /usr/local/vip_monitor/vip_monitor.sh
#!/bin/bash
while :
do
    VIP_NUM=`ip addr | grep 10.90.12.30 | wc -l`
    RSYNC_INOTIFY_NUM=`ps -ef | grep /usr/bin/inotifywait | grep -v grep | wc -l`
    if [ ${VIP_NUM} -eq 0 ]; then
        # The VIP is not on this NFS node: make sure the sync service is stopped
        echo "VIP is not on this NFS node" > /tmp/1.log
        if [ ${RSYNC_INOTIFY_NUM} -ne 0 ]; then
            systemctl stop nfs_rsync.service
        fi
    else
        # The VIP is on this NFS node: make sure the sync service is running
        echo "VIP is on this NFS node" >/dev/null 2>&1
        systemctl start nfs_rsync.service
    fi
    sleep 20
done
2) Manage the VIP monitoring script with systemd
- Write the start script
# vi /usr/local/vip_monitor/vip_monitor_start.sh
#!/bin/bash
nohup sh /usr/local/vip_monitor/vip_monitor.sh >> /var/log/vip_monitor.log 2>&1 &
- Write the stop script
# vi /usr/local/vip_monitor/vip_monitor_stop.sh
#!/bin/bash
ps -ef | grep vip_monitor.sh | grep -v "grep" | awk '{print $2}' | xargs kill -9
- Write the systemd unit file
# vi /usr/lib/systemd/system/vip_monitor.service
[Unit]
Description=vip_monitor service
[Service]
Type=forking
TimeoutStartSec=10
WorkingDirectory=/usr/local/vip_monitor
User=root
Group=root
Restart=on-failure
RestartSec=15s
ExecStart=/usr/local/vip_monitor/vip_monitor_start.sh
ExecStop=/usr/local/vip_monitor/vip_monitor_stop.sh
[Install]
WantedBy=multi-user.target
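As with the sync scripts, make the start/stop scripts executable so systemd can run them:
# chmod +x /usr/local/vip_monitor/vip_monitor_start.sh /usr/local/vip_monitor/vip_monitor_stop.sh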
3) Start the VIP monitor service
systemctl enable vip_monitor.service --now
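Finally, a simple failover test (a sketch, assuming the VIP 10.90.12.30 is currently on the Master): stop keepalived on the Master to simulate a failure and watch the VIP move to the Slave.
# systemctl stop keepalived                   ## on the Master
# ip addr show ens192 | grep 10.90.12.30      ## on the Slave: the VIP should appear within a few seconds
Because keepalived runs in non-preemptive mode, the VIP stays on the Slave even after the Master recovers. K8S nodes should always mount the share through the VIP 10.90.12.30 rather than either node's real IP, so that failover stays transparent to the cluster.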