Steps to reformat the Hadoop NameNode
My cluster has three nodes:
master h3
worker1 h4
worker2 h5
ZooKeeper is already running on every machine before we begin.
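If in doubt, check each node before continuing (zkServer.sh ships with ZooKeeper; the exact path depends on your installation):
zkServer.sh status    # should report Mode: leader or Mode: follower on every node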
Steps
1. On the master node: stop-all.sh
2. On the master node, delete all temporary directories and logs, both the ones configured in the XML files and the default ones.
3. On the master node: hdfs zkfc -formatZK
4. On the master node: hdfs --daemon start journalnode
5. On each worker node: hdfs --daemon stop datanode; delete the worker's temporary directories and logs; then hdfs --daemon start journalnode
6. On the master node: start-dfs.sh
7. On the master node: hdfs namenode -format
8. On the master node: start-all.sh
9. On each worker node: hdfs --daemon start datanode
Done.
The following script, run on the master, performs all of the above. It requires expect to be installed, passwordless SSH from the master to the workers, and the firewall to be turned off.
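A quick way to cover those prerequisites (the package manager and service names below assume a CentOS-style system; adjust for your distribution):
yum install -y expect                                      # or: apt-get install expect
ssh-copy-id h4 && ssh-copy-id h5                           # passwordless SSH from master to the workers
systemctl stop firewalld && systemctl disable firewalld    # turn the firewall off
The script: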
#!/bin/bash
master="h3"
worker1="h4"
worker2="h5"
tempfile1="/tmp-hadoop/name"
tempfile2="/tmp-hadoop/data"
tempfile3="/tmp-hadoop/journaldata"
tempfile4="/opt/hadoop-3.1.1/logs"
tempfile5="/tmp/hadoop";
expectBin="/usr/local/bin/expect"
# Check the firewall status; do NOT stop ZooKeeper
systemctl status firewalld
# Reload environment variables
source /etc/profile
# Stop all Hadoop daemons
stop-all.sh
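# Wipe the NameNode, DataNode, and JournalNode data plus all logs on the master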
rm -rf $tempfile1/*
rm -rf $tempfile2/*
rm -rf $tempfile3/*
rm -rf $tempfile4/*
rm -rf $tempfile5/*
# Format the HA state znode in ZooKeeper; expect answers the
# "Proceed formatting /hadoop-ha/cluster? (Y or N)" prompt automatically
$expectBin << !
spawn hdfs zkfc -formatZK
expect {
    "*formatting*" { send "y\r" }
}
expect eof
!
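# Note: if your Hadoop version supports it, 'hdfs zkfc -formatZK -force' skips the
# prompt entirely and makes the expect block above unnecessary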
hdfs --daemon start journalnode
echo "go worker1...."
ssh $worker1 "
/opt/hadoop-3.1.1/bin/hdfs --daemon stop datanode;
rm -rf /tmp-hadoop/name/*;
rm -rf /tmp-hadoop/data/*;
rm -rf /tmp-hadoop/journaldata/*;
rm -rf /opt/hadoop-3.1.1/logs/*;
rm -rf /tmp/hadoop/*;
/opt/hadoop-3.1.1/bin/hdfs --daemon start journalnode;
exit"
echo "go worker2...."
ssh $worker2 "
/opt/hadoop-3.1.1/bin/hdfs --daemon stop datanode;
rm -rf /tmp-hadoop/name/*;
rm -rf /tmp-hadoop/data/*;
rm -rf /tmp-hadoop/journaldata/*;
rm -rf /opt/hadoop-3.1.1/logs/*;
rm -rf /tmp/hadoop/*;
/opt/hadoop-3.1.1/bin/hdfs --daemon start journalnode;
exit"
echo "back to master...."
# DFS must be started before formatting the NameNode
start-dfs.sh
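# The storage directories were wiped above, so -format should not prompt here;
# if your version still asks for confirmation, 'hdfs namenode -format -force' skips it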
hdfs namenode -format
start-all.sh
echo "go worker1 again...."
ssh $worker1 "
/opt/hadoop-3.1.1/bin/hdfs --daemon start datanode;
jps;
exit"
echo "go worker2 again...."
ssh $worker2 "
/opt/hadoop-3.1.1/bin/hdfs --daemon start datanode;
jps;
exit"
jps
When the script finishes, check that all the expected daemons are running on each node.
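Besides jps, HDFS itself can confirm the cluster is healthy (standard commands; output varies with your configuration):
hdfs haadmin -getAllServiceState    # active/standby state of each NameNode
hdfs dfsadmin -report               # live DataNodes and their capacity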