Integrating Hadoop 0.20.2 + HBase 0.90.4 + ZooKeeper 3.3.3, and problems encountered along the way
Test environment:
VMware 7.1
CentOS 5.5
JDK 1.6
This assumes you already have a working Hadoop installation; the Hadoop configuration I used is shown below (step-by-step Hadoop setup tutorials are easy to find online).
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hadoop:9001</value>
</property>
</configuration>
Since HBase needs ZooKeeper to coordinate its work, we can set up our own ZooKeeper ensemble for HBase to use.
ZooKeeper configuration
zoo.cfg:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
Start ZooKeeper with the command bin/zkServer.sh start.
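Before moving on to HBase it is worth confirming that ZooKeeper is actually up. A quick check might look like this (the host and port follow the zoo.cfg above; a healthy server answers the "ruok" probe with "imok"):

```
$ bin/zkServer.sh status
$ echo ruok | nc localhost 2181
imok
```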
HBase configuration
hbase-env.sh:
export JAVA_HOME=<path to your JDK>
export HBASE_MANAGES_ZK=false
If you would rather have HBase manage ZooKeeper itself, set HBASE_MANAGES_ZK to true; you then don't need to start ZooKeeper by hand, because HBase ships with an embedded ZooKeeper.
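For reference, the managed-ZooKeeper variant of hbase-env.sh would look like this (a sketch; the JDK path is a placeholder you must adjust):

```
export JAVA_HOME=/usr/java/jdk1.6.0    # placeholder; point at your JDK
export HBASE_MANAGES_ZK=true           # HBase starts/stops its bundled ZooKeeper
```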
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60010</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop</value>
</property>
</configuration>
Note: the port in hbase.rootdir must match the port of fs.default.name in Hadoop's core-site.xml; otherwise HBase will not start.
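This consistency rule is easy to check programmatically. The following is a sketch that parses the config snippets from this post (inlined here so the example is self-contained; in practice you would read the real core-site.xml and hbase-site.xml from your conf directories):

```python
import re
import xml.etree.ElementTree as ET

def get_property(xml_text, name):
    """Return the <value> for the given <name> in a Hadoop-style config."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

def port_of(uri):
    """Extract the port from a URI like hdfs://hadoop:9000/hbase."""
    m = re.search(r"://[^:/]+:(\d+)", uri)
    return int(m.group(1)) if m else None

core_site = """<configuration>
  <property><name>fs.default.name</name><value>hdfs://hadoop:9000</value></property>
</configuration>"""

hbase_site = """<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://hadoop:9000/hbase</value></property>
</configuration>"""

fs_port = port_of(get_property(core_site, "fs.default.name"))
hbase_port = port_of(get_property(hbase_site, "hbase.rootdir"))
assert fs_port == hbase_port, "hbase.rootdir port must match fs.default.name"
print("ports match:", fs_port)
```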
Delete hadoop-core-0.20-append-r1056497.jar from HBase's lib directory and replace it with hadoop-0.20.2-core.jar from the Hadoop 0.20.2 distribution, so HBase and Hadoop speak the same RPC version.
Start HBase with the command bin/start-hbase.sh. Once it is up, open the HBase command window with bin/hbase shell; typing list shows the existing tables.
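A minimal shell session to verify the install might look like this (the table name and column family are arbitrary examples):

```
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> list
```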
Problems encountered:
After HBase started, running list threw org.apache.hadoop.hbase.MasterNotRunningException: null.
The log showed the following error:
Exception in thread "main" java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2719)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:60)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:75)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2742)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2717)
        ... 5 more
Caused by: java.net.BindException: Problem binding to /61.140.3.66:60020 : Cannot assign requested address
        at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:203)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:270)
        at org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:1168)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:544)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:514)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:331)
        ... 10 more
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:201)
        ... 15 more
Odd: I had set this virtual machine's IP to 192.168.32.5, so why was the region server trying to bind to 61.140.3.66? Could the address be getting resolved through IPv6? Disabling IPv6 on CentOS made the problem go away. To disable it, edit /etc/modprobe.conf and add:
alias net-pf-10 off
alias ipv6 off
The change takes effect after a reboot.
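As an alternative (or complement) to disabling IPv6 system-wide, you can tell the HBase JVM itself to prefer the IPv4 stack. This is a suggested tweak, not something from the original setup, added to hbase-env.sh:

```
export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"
```

java.net.preferIPv4Stack is a standard JVM networking property, so this avoids touching the OS configuration at all.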