1. Hadoop Installation and Configuration Steps
1.1 Server preparation

Four Red Hat servers:

192.168.130.170 master
192.168.130.168 dd1
192.168.130.162 dd2
192.168.130.248 dd3
1.2 Installing and configuring the JDK
Install JDK 1.6 and set the environment variables in /etc/profile.
Detailed steps:
1.2.1 Download link:
http://www.oracle.com/technetwork/java/javase/downloads/index.html (pick whichever version you need; JDK 1.6 is used here)
After installation, run java -version to verify that it succeeded:
[hadoop@master conf]$ java -version
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)
1.2.2 Setting the environment variables
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_37
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME CLASSPATH PATH
1.2.3 After editing and saving /etc/profile, run the following command so the changes take effect immediately:
source /etc/profile
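To confirm that the new variables are visible in the current shell, a simple check (no assumptions beyond the paths set above):
echo $JAVA_HOME
which java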
1.3 Software downloads
Hadoop 1.0.4: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-1.0.4/hadoop-1.0.4.tar.gz
ZooKeeper 3.4.5: http://apache.dataguru.cn/zookeeper/zookeeper-3.4.5/zookeeper-3.4.5.tar.gz
HBase 0.94.5: http://mirrors.tuna.tsinghua.edu.cn/apache/hbase/hbase-0.94.5/hbase-0.94.5.tar.gz
1.4 Creating the hadoop user and group on all four servers and configuring passwordless SSH
1.4.1 Create the user and group:
groupadd -g 2000 hadoop
useradd -u 2000 -g hadoop hadoop
passwd hadoop
Enter the password twice; it is set to hadoop here.
1.4.2 Generate an SSH key pair with the following command:
ssh-keygen -t rsa    (press Enter three times to accept the defaults)
1.4.3 Append the public key to the authorized keys file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
1.4.4 Set the permissions of authorized_keys to 644:
chmod 644 ~/.ssh/authorized_keys    (make sure authorized_keys has permission 644 on every server)
1.4.5 Verify that passwordless SSH login works:
ssh localhost
On the first login you will be asked whether to continue connecting; type yes.
1.4.6 Passwordless SSH between the servers
Use scp to copy the current server's public key to the other servers:
scp ~/.ssh/id_rsa.pub hadoop@dd1:~/
scp ~/.ssh/id_rsa.pub hadoop@dd2:~/
scp ~/.ssh/id_rsa.pub hadoop@dd3:~/
You will be prompted whether to continue connecting; type yes, then enter the hadoop user's password (hadoop).
Then run the following on each server:
cat id_rsa.pub >> ~/.ssh/authorized_keys
rm id_rsa.pub    (delete the file after appending it, so that keys copied over from other servers later do not conflict or cause unnecessary errors)

The same procedure applies between the other servers.
1.4.7 Verify that the servers can reach each other over SSH
ssh dd1    (the first connection asks whether to continue; type yes). If you land on the target node without a password prompt, the setup succeeded.
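As an extra sanity check, a small loop run from master verifies passwordless SSH to every node in one pass; this is only a sketch, assuming the hadoop user and that the hostnames used in this document resolve (see 1.5):
for h in master dd1 dd2 dd3; do ssh hadoop@$h hostname; done
Each iteration should print the remote hostname without asking for a password.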
1.5 Editing /etc/hosts
Add the hostnames of all servers to the hosts file on each of the four servers:

192.168.130.170 master
192.168.130.168 dd1
192.168.130.162 dd2
192.168.130.248 dd3
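A quick, purely illustrative way to confirm that every entry resolves and is reachable:
for h in master dd1 dd2 dd3; do ping -c 1 $h; done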
1.6 Configuring Hadoop
As the hadoop user, copy the Hadoop, ZooKeeper and HBase .tar.gz packages to ~/ via FTP.
1.6.1 Unpack the Hadoop archive
tar -zxvf hadoop-1.0.4.tar.gz
mv hadoop-1.0.4 hadoop
1.6.2 Edit the Hadoop configuration files
(1) conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_37
(2) conf/core-site.xml
<configuration>
	<property>
		<name>fs.default.name</name>
		<value>hdfs://master:9000</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/data/zqhadoop/data</value>
	</property>
</configuration>
(3) conf/hdfs-site.xml
<configuration>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<property>
		<name>dfs.name.dir</name>
		<value>/home/zqhadoop/HDFS/Namenode</value>
	</property>
	<property>
		<name>dfs.data.dir</name>
		<value>/home/zqhadoop/HDFS/Datanode</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
	<property>
		<name>dfs.datanode.max.xcievers</name>
		<value>4096</value>
	</property>
</configuration>
(4) conf/mapred-site.xml
<configuration>
	<property>
		<name>mapred.job.tracker</name>
		<value>master:9001</value>
	</property>
</configuration>
(5) conf/masters and conf/slaves
masters:
master
slaves:
dd1
dd2
dd3
1.6.3 Copy the configured Hadoop directory to the other servers with scp
scp -r ~/hadoop hadoop@dd1:~/
scp -r ~/hadoop hadoop@dd2:~/
scp -r ~/hadoop hadoop@dd3:~/
1.6.4 Add the Hadoop environment variables to /etc/profile on each server
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
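After sourcing /etc/profile again, the hadoop command should be on the PATH on every node; for example:
source /etc/profile
hadoop version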
1.6.5 Format the Hadoop NameNode
From the hadoop directory, run:
bin/hadoop namenode -format
1.6.6 Start the Hadoop cluster
bin/start-all.sh
2. Verifying the Hadoop Cluster
2.1 Use the jps tool that ships with the JDK to check the daemons

master:
[zqhadoop@master bin]$ jps
763 SecondaryNameNode
866 JobTracker
8526 Jps
554 NameNode
slave:
[zqhadoop@dd1 ~]$ jps
19422 Jps
14051 DataNode
14159 TaskTracker
2.2 Run hadoop dfsadmin -report to get an overview of the cluster
[zqhadoop@master bin]$ ./hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 95862853632 (89.28 GB)
Present Capacity: 83737403392 (77.99 GB)
DFS Remaining: 83736358912 (77.99 GB)
DFS Used: 1044480 (1020 KB)
DFS Used%: 0%
Under replicated blocks: 1
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 192.168.130.248:50010
Decommission Status : Normal
Configured Capacity: 31954284544 (29.76 GB)
DFS Used: 348160 (340 KB)
Non DFS Used: 3638841344 (3.39 GB)
DFS Remaining: 28315095040(26.37 GB)
DFS Used%: 0%
DFS Remaining%: 88.61%
Last contact: Tue Feb 19 10:42:48 CST 2013

Name: 192.168.130.168:50010
Decommission Status : Normal
Configured Capacity: 31954284544 (29.76 GB)
DFS Used: 348160 (340 KB)
Non DFS Used: 3718184960 (3.46 GB)
DFS Remaining: 28235751424(26.3 GB)
DFS Used%: 0%
DFS Remaining%: 88.36%
Last contact: Tue Feb 19 10:42:49 CST 2013

Name: 192.168.130.162:50010
Decommission Status : Normal
Configured Capacity: 31954284544 (29.76 GB)
DFS Used: 348160 (340 KB)
Non DFS Used: 4768423936 (4.44 GB)
DFS Remaining: 27185512448(25.32 GB)
DFS Used%: 0%
DFS Remaining%: 85.08%
Last contact: Tue Feb 19 10:42:48 CST 2013
2.3 Verify Hadoop through the web interfaces
The cluster overview and DataNode status can be viewed in a browser at the following two addresses, where master-IP is the IP address of the master host.

http://master-IP:50030 (JobTracker: MapReduce job status)

http://master-IP:50070 (NameNode: status of the distributed file system, with a file browser and logs)
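If no browser is available, the same pages can be fetched from the command line; a minimal sketch, assuming the default page names of the Hadoop 1.x web UIs:
curl -s http://master:50030/jobtracker.jsp | head
curl -s http://master:50070/dfshealth.jsp | head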



3. ZooKeeper Installation and Configuration Steps
3.1 Unpack the ZooKeeper archive
tar -zxvf zookeeper-3.4.5.tar.gz
mv zookeeper-3.4.5 zookeeper
3.2 Configure zoo.cfg
In the zookeeper/conf directory, run:
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg as follows:

dataDir=/home/zqhadoop/zookeeper_data
dataLogDir=/home/zqhadoop/zookeeper_log
clientPort=2181
initLimit=10
syncLimit=5
tickTime=2000
server.1=192.168.130.170:2888:3888
server.2=192.168.130.168:2888:3888
server.3=192.168.130.162:2888:3888
server.4=192.168.130.248:2888:3888
3.3 Create the dataDir directory (here /home/zqhadoop/zookeeper_data) and create a file named myid inside it
mkdir /home/zqhadoop/zookeeper_data
cd /home/zqhadoop/zookeeper_data
3.4 Edit the myid file and, on each machine, enter the number that corresponds to its IP address. For example, on 192.168.130.170 the content of myid is 1, and on 192.168.130.168 it is 2, matching the server.N entries in zoo.cfg above; see the sketch below.
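For reference, a sketch of the commands, one per host, matching the server.N entries in zoo.cfg (run each line on the corresponding machine):
echo 1 > /home/zqhadoop/zookeeper_data/myid    # on 192.168.130.170 (master)
echo 2 > /home/zqhadoop/zookeeper_data/myid    # on 192.168.130.168 (dd1)
echo 3 > /home/zqhadoop/zookeeper_data/myid    # on 192.168.130.162 (dd2)
echo 4 > /home/zqhadoop/zookeeper_data/myid    # on 192.168.130.248 (dd3)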
3.5 Copy the configured ZooKeeper to the other servers with scp
scp -r zookeeper hadoop@dd1:~/
scp -r zookeeper hadoop@dd2:~/
scp -r zookeeper hadoop@dd3:~/

scp -r zookeeper_data hadoop@dd1:~/
scp -r zookeeper_data hadoop@dd2:~/
scp -r zookeeper_data hadoop@dd3:~/
Then adjust the value in zookeeper_data/myid on each server according to zoo.cfg.
3.6 Start ZooKeeper
ZooKeeper has to be started separately; run the following command on every server:
~/zookeeper/bin/zkServer.sh start
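Since passwordless SSH is already configured (section 1.4), the daemons can also be started from a single terminal; a sketch assuming the hadoop user and the directory layout used here:
for h in master dd1 dd2 dd3; do ssh hadoop@$h 'source /etc/profile && ~/zookeeper/bin/zkServer.sh start'; done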
4. Verifying the ZooKeeper Installation
4.1 Use jps to check whether the service started

[hadoop@master conf]$ jps
763 SecondaryNameNode
866 JobTracker
9203 Jps
1382 QuorumPeerMain
554 NameNode
Notice that jps now shows one additional process, QuorumPeerMain.
4.2 Run ~/zookeeper/bin/zkServer.sh status to check whether the server started
[zqhadoop@master bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /home/zqhadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower
5. HBase Installation and Configuration
5.1 Unpack the HBase archive

tar -zxvf hbase-0.94.5.tar.gz
mv hbase-0.94.5 hbase
5.2 Edit the HBase configuration files
(1) conf/hbase-site.xml

<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>dd1,dd2,dd3</value>
	</property>
	<property>
		<name>hbase.zookeeper.session.timeout</name>
		<value>60000</value>
	</property>
	<property>
		<name>hbase.zookeeper.property.clientPort</name>
		<value>2181</value>
	</property>
	<property>
		<name>hbase.master</name>
		<value>master</value>
	</property>
	<property>
		<name>hbase.regionserver.lease.period</name>
		<value>60000</value>
	</property>
	<property>
		<name>hbase.rpc.timeout</name>
		<value>60000</value>
	</property>
	<property>
		<name>hbase.master.maxclockskew</name>
		<value>180000</value>
	</property>
</configuration>

  • hbase.rootdir: the HBase directory on HDFS; the hostname is the host running the HDFS NameNode
  • hbase.cluster.distributed: set to true for a fully distributed HBase cluster
  • hbase.zookeeper.quorum: the ZooKeeper hosts; an odd number of hosts is recommended
  • hbase.zookeeper.property.dataDir: the ZooKeeper data path
  • hbase.master.maxclockskew: the maximum allowed clock skew between the servers
  • hbase.zookeeper.session.timeout: the ZooKeeper session timeout
  • hbase.rpc.timeout: the timeout for HBase RPC calls
  • hbase.zookeeper.property.clientPort: the ZooKeeper client port
(2) conf/regionservers
dd1
dd2
dd3
(3) conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_37
export HBASE_MANAGES_ZK=false
(4) Add the following to Hadoop's hdfs-site.xml:
<property>
	<name>dfs.datanode.max.xcievers</name>
	<value>4096</value>
</property>
This parameter limits the number of send and receive tasks a DataNode may run concurrently; the default is 256, and hadoop-defaults.xml usually does not set it. In practice that limit is rather small: under heavy load, DFSClient throws a "could not read from stream" exception while putting data.
An Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time. The upper bound parameter is called xcievers (yes, this is misspelled).
Not having this configuration in place makes for strange looking failures. Eventually you’ll see a complain in the datanode logs complaining about the xcievers exceeded, but on the run up to this one manifestation is complaint about missing blocks. For example: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry…
5.3 Delete hadoop-core-1.0.X.jar and commons-collections-3.2.1.jar from hbase/lib, then copy the hadoop-core jar and commons-collections-3.2.1.jar from the Hadoop directory into hbase/lib. This keeps the Hadoop and HBase jar versions in sync and avoids version-incompatibility errors; see the command sketch below.
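A sketch of this step, assuming the directory layout used in this document (the exact jar file names depend on the versions actually shipped, so check hbase/lib first):
cd ~/hbase/lib
rm hadoop-core-1.0.*.jar commons-collections-3.2.1.jar
cp ~/hadoop/hadoop-core-1.0.4.jar .
cp ~/hadoop/lib/commons-collections-3.2.1.jar .
Repeat on every node that will run HBase.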
5.4 Check whether /etc/hosts binds the hostname to 127.0.0.1; if it does, comment that entry out
The explanation from the official documentation:
Before we proceed, make sure you are good on the below loopback prerequisite.
Loopback IP
HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, for example, will default to 127.0.1.1 and this will cause problems for you.
/etc/hosts should look something like this:
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
In other words, HBase expects the loopback address to be 127.0.0.1, and /etc/hosts should look like the example above, which adds the 127.0.0.1 localhost entry. Alternatively, you can skip adding that entry and simply comment out the offending line (shown in the error record below); that solves the problem as well.

Error record:
Case 1:
2012-12-19 22:11:46,018 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section ‘Client’ could not
be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-12-19 22:11:46,024 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
Solution:
Edit /etc/hosts
and comment out the following line:

#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.130.170 master
192.168.130.168 dd1
192.168.130.162 dd2
192.168.130.248 dd3
192.168.130.164 dd4
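Before verification, HBase still has to be copied to the slave nodes and started from the master; a plausible sketch, assuming the same scp pattern used above for Hadoop and ZooKeeper and the default start script:
scp -r ~/hbase hadoop@dd1:~/
scp -r ~/hbase hadoop@dd2:~/
scp -r ~/hbase hadoop@dd3:~/
~/hbase/bin/start-hbase.sh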
6. Verifying HBase
6.1 Use jps to check the running processes

masters:
[zqhadoop@master hbase]$ jps
763 SecondaryNameNode
9326 Jps
866 JobTracker
3207 HMaster
1382 QuorumPeerMain
554 NameNode
slaves:
[zqhadoop@dd1 ~]$ jps
15065 HRegionServer
14051 DataNode
14159 TaskTracker
14447 QuorumPeerMain
20157 Jps
6.2 Run hbase shell to check that the connection works
[zqhadoop@master bin]$ ./hbase shell
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.94.5, r1443843, Fri Feb 8 05:51:25 UTC 2013

hbase(main):001:0>
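Beyond opening the shell, a quick smoke test confirms that the master and region servers can actually serve requests; the table and column family names below are purely illustrative:
create 'test_table', 'cf'
put 'test_table', 'row1', 'cf:a', 'value1'
scan 'test_table'
disable 'test_table'
drop 'test_table'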
6.3 Verify HBase through the web interface
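In HBase 0.94 the master web UI typically listens on port 60010 and each region server on port 60030 (default ports, assuming they have not been changed):
http://master-IP:60010 (HMaster status page)
http://dd1:60030 (HRegionServer status page)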
