Hadoop, HBase, and Spark cluster: service start and stop order
Cluster environment
master 192.168.145.180
slave1 192.168.145.181
slave2 192.168.145.182
Service distribution across the nodes (the groups column lists the host groups used by runRemoteCmd.sh):
host     groups       services
master   all          NameNode, DataNode, ZooKeeper (QuorumPeerMain), ResourceManager, NodeManager, Spark Master, Spark Worker, HMaster, HRegionServer
slave1   all, slave   NameNode, DataNode, ZooKeeper (QuorumPeerMain), ResourceManager, NodeManager, Spark Master, Spark Worker, HMaster, HRegionServer
slave2   all, slave   DataNode, ZooKeeper (QuorumPeerMain), NodeManager, Spark Worker, HRegionServer
Start the ZooKeeper cluster
master>runRemoteCmd.sh "zkServer.sh start" zookeeper
Alternatively, start it on each of the three machines individually:
master>zkServer.sh start
slave1>zkServer.sh start
slave2>zkServer.sh start
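The runRemoteCmd.sh used above is a site-specific helper, not a stock Hadoop script. A minimal sketch of what such a script might look like, assuming one plain-text host list per group (files named all, slave, zookeeper, master; the ~/tools path is an assumption):

#!/bin/bash
# runRemoteCmd.sh (sketch) - run a command on every host in a group
# usage: runRemoteCmd.sh "<command>" <group>
cmd=$1
group=$2
# each group file holds one hostname per line
for host in $(cat ~/tools/"$group"); do
  echo "*******************${host}***************************"
  ssh "$host" "source ~/.bashrc; $cmd"
done

The same helper can verify the quorum after startup, e.g. runRemoteCmd.sh "zkServer.sh status" zookeeper should report one leader and two followers.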
Start the Hadoop (HDFS) cluster
master>start-dfs.sh
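A quick way to confirm HDFS is healthy after start-dfs.sh (the HA service IDs nn1/nn2 below are assumptions; use the IDs from your hdfs-site.xml):

# the report should show all three DataNodes as live
hdfs dfsadmin -report
# with NameNode HA, exactly one of the two should report "active"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2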
Start the YARN cluster
master>start-yarn.sh
start-yarn.sh only starts the ResourceManager on the node where it runs, so the second ResourceManager on slave1 must be started separately:
ssh slave1
yarn-daemon.sh start resourcemanager
ssh master
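Because there are two ResourceManagers, it is worth confirming which one is active (rm1/rm2 are assumed HA IDs; use the ones from your yarn-site.xml):

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2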
Start the HBase cluster
master>start-hbase.sh
Or start the HBase daemons individually:
runRemoteCmd.sh "hbase-daemon.sh start regionserver" all
runRemoteCmd.sh "hbase-daemon.sh start master" master
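To verify HBase came up, the status command can be piped into the HBase shell; it should list all three region servers and no dead servers:

echo "status" | hbase shell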
Start the Spark cluster
Start the master node: sbin/start-master.sh
Start the worker nodes: sbin/start-slaves.sh
Or both at once: sbin/start-all.sh
On this cluster:
master>runRemoteCmd.sh "start-master.sh" master
master>start-slaves.sh
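start-slaves.sh reads the worker list from $SPARK_HOME/conf/slaves, one hostname per line, so for this cluster the file would presumably contain:

master
slave1
slave2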
Check the services running on each machine with jps:
[hadoop3@master ~]$ runRemoteCmd.sh "jps" all
*******************master***************************
3920 HMaster
4737 Master
2738 JournalNode
2275 QuorumPeerMain
2452 NameNode
4852 Worker
4054 HRegionServer
2891 DFSZKFailoverController
5181 Jps
2558 DataNode
3166 ResourceManager
3278 NodeManager
*******************slave1***************************
2419 DataNode
2596 DFSZKFailoverController
4758 Jps
2523 JournalNode
3515 HRegionServer
3595 HMaster
4267 Master
3085 ResourceManager
2254 QuorumPeerMain
2350 NameNode
2863 NodeManager
4367 Worker
*******************slave2***************************
2705 HRegionServer
2308 DataNode
3316 Jps
3143 Worker
2425 NodeManager
2218 QuorumPeerMain
[hadoop3@master ~]$
Once startup is complete, you can open the web management pages.
Hadoop cluster management pages (NameNode web UI):
http://192.168.145.180:50070/
http://192.168.145.181:50070
Hadoop RPC (HDFS) addresses:
hdfs://master:8020
hdfs://slave1:8020
Hadoop WebHDFS REST API:
http://master:50070
http://slave1:50070
REST API calls, for example:
CopyToHDFS (in_local_file, host_name, port_number, user_name, in_remote_file, {append_file})
and so on.
For details, see the WebHDFS REST API documentation (easy to find online).
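As a concrete illustration, two raw curl calls against the WebHDFS endpoint (assuming simple, non-Kerberos authentication and the hadoop3 user from the jps listing above; the /tmp/demo path is made up):

# create a directory
curl -i -X PUT "http://master:50070/webhdfs/v1/tmp/demo?op=MKDIRS&user.name=hadoop3"
# upload a file: the first call returns a 307 redirect to a DataNode,
# the second call sends the data to the URL from the Location header
curl -i -X PUT "http://master:50070/webhdfs/v1/tmp/demo/a.txt?op=CREATE&user.name=hadoop3"
curl -i -X PUT -T a.txt "http://<datanode-from-Location-header>:50075/webhdfs/v1/tmp/demo/a.txt?op=CREATE&user.name=hadoop3"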
YARN cluster management pages (ResourceManager web UI):
http://192.168.145.180:8088/cluster
http://192.168.145.181:8088/cluster
HBase cluster management pages (HMaster web UI):
http://192.168.145.180:16010/master-status
http://192.168.145.181:16010/master-status
HBase ZooKeeper connection settings (for Spark and MapReduce clients):
import org.apache.hadoop.hbase.HBaseConfiguration

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set("hbase.zookeeper.quorum", "master,slave1,slave2")
hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")
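Clients can only connect if every quorum member answers on 2181; a quick check using ZooKeeper's four-letter commands (assumes nc is installed on the node you run this from):

for h in master slave1 slave2; do
  echo "== $h =="
  echo srvr | nc "$h" 2181 | grep Mode   # prints Mode: leader or Mode: follower
done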
Spark cluster management pages (Master web UI):
http://192.168.145.180:18080/
http://192.168.145.181:18080/
Spark RPC (master) addresses:
spark://master:7077
spark://slave1:7077
Spark REST submission addresses (cluster mode):
spark://master:6066
spark://slave1:6066
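How the two endpoints are used with spark-submit (SparkPi is the stock Spark example; the jar paths are assumptions for your installation):

# client mode via the RPC endpoint on 7077
spark-submit --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples.jar 100

# cluster mode via the REST endpoint on 6066; the jar must be reachable
# from every worker, e.g. from HDFS
spark-submit --master spark://master:6066 --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  hdfs://master:8020/jars/spark-examples.jar 100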
Order for stopping the cluster services
Stop the Spark cluster
master>spark/sbin/stop-slaves.sh
master>spark/sbin/stop-master.sh
Stop the HBase cluster
master>stop-hbase.sh
Stop the YARN cluster
master>stop-yarn.sh
As at startup, stop-yarn.sh only stops the local ResourceManager; stop the one on slave1 separately:
ssh slave1
yarn-daemon.sh stop resourcemanager
Stop the Hadoop (HDFS) cluster
master>stop-dfs.sh
Stop the ZooKeeper cluster
master>runRemoteCmd.sh "zkServer.sh stop" zookeeper
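The whole shutdown sequence can be wrapped in one script; a sketch that assumes all the stock scripts are on the PATH and that runRemoteCmd.sh exists as sketched earlier:

#!/bin/bash
# stop-cluster.sh (sketch) - stop everything in the reverse order of startup
stop-slaves.sh                                     # Spark workers
stop-master.sh                                     # Spark master
stop-hbase.sh                                      # HBase masters + region servers
ssh slave1 "yarn-daemon.sh stop resourcemanager"   # standby ResourceManager
stop-yarn.sh                                       # YARN on master
stop-dfs.sh                                        # HDFS
runRemoteCmd.sh "zkServer.sh stop" zookeeper       # ZooKeeper last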
All cluster services are now stopped.
--the end--