Installing JDK, Hadoop, ZooKeeper, HBase, and Hive on Linux
I. JDK installation
1. Preparation
①. Download Hadoop-2.7.4 and a matching JDK
jdk-8u161: link: https://pan.baidu.com/s/1Z1kDg_tgakA-Hx5VsakPQg  extraction code: rel3
hadoop-2.7.4: link: https://pan.baidu.com/s/1r5OwXFbuaByk45zADxfDNQ  extraction code: nyt1
②. Configure a static IP for each machine
See: "How to configure a static IP on CentOS 7" (庸人自扰665's blog on CSDN)
③. Edit the /etc/hosts file
[root@hadoop01 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.51.129 hadoop01
192.168.51.130 hadoop02
192.168.51.131 hadoop03
④. Set up passwordless SSH trust among the three hosts
[root@hadoop01 ~]# yum install openssh-server    (install on all three hosts)
[root@hadoop01 ~]# ssh-keygen -t rsa    (press Enter four times to accept the defaults)
[root@hadoop01 ~]# ssh-copy-id hadoop02
[root@hadoop01 ~]# ssh-copy-id hadoop03
[root@hadoop01 ~]# ssh-copy-id hadoop01
[root@hadoop01 ~]# scp -r /root/.ssh/ root@hadoop02:/root/.ssh
[root@hadoop01 ~]# scp -r /root/.ssh/ root@hadoop03:/root/.ssh
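The repeated ssh-copy-id runs above can be expressed as one loop. A dry-run sketch (it only prints the commands; drop the `echo` to actually execute them, assuming ssh-keygen has already been run):

```shell
# Dry-run sketch of the key-distribution step above: print one
# ssh-copy-id command per cluster host. Remove the `echo` to run
# them for real (requires ssh-keygen to have been run first).
print_trust_commands() {
    local target
    for target in hadoop01 hadoop02 hadoop03; do
        echo "ssh-copy-id $target"
    done
}

print_trust_commands
```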
2. Install the JDK
①. Create the /export/servers and /export/software directories
[root@hadoop01 ~]# mkdir -p /export/servers
[root@hadoop01 ~]# mkdir -p /export/software
[root@hadoop01 ~]# cd /export/software
②. Upload the archives to /export/software with a transfer tool
WinSCP download: link: https://pan.baidu.com/s/1cRxySrA84WDJm0vJW97t9A  extraction code: 7ovz
③. Extract the JDK archive into /export/servers
[root@hadoop01 ~]# tar -zxvf jdk-8u161-linux-x64.tar.gz -C /export/servers
④. Rename the extracted directory
[root@hadoop01 ~]# cd /export/servers
[root@hadoop01 ~]# mv jdk1.8.0_161 jdk
⑤. Add the following to /etc/profile
[root@hadoop01 ~]# vi /etc/profile
# JAVA_HOME
export JAVA_HOME=/export/servers/jdk
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Apply the changes:
[root@hadoop01 ~]# source /etc/profile
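To sanity-check the exports after sourcing the profile, a small sketch that mirrors the lines above (note that CLASSPATH must begin with `.:` so the current directory stays on the class path) and verifies that $JAVA_HOME/bin landed on PATH:

```shell
# Sketch mirroring the /etc/profile lines above. The CLASSPATH begins
# with ".:" so the current directory remains on the class path.
JAVA_HOME=/export/servers/jdk
PATH="$PATH:$JAVA_HOME/bin"
CLASSPATH=".:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar"

# Verify that JAVA_HOME/bin is now a PATH component.
case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
    *)                    echo "PATH is missing JAVA_HOME/bin" ;;
esac
```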
⑥. Check the JDK
[root@hadoop01 ~]# java -version
If JDK version information is printed, the installation succeeded.
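If a script should assert the version rather than a human eyeballing it, a sketch that parses a `java -version`-style line (the sample string below is illustrative, not captured from a live run; the real command writes to stderr, hence the 2>&1 hint in the comment):

```shell
# Sketch: extract the quoted version number from a `java -version` line.
# On a live host: java -version 2>&1 | head -1, then parse that line.
parse_java_version() {
    printf '%s\n' "$1" | sed -n 's/.*version "\([^"]*\)".*/\1/p'
}

sample='java version "1.8.0_161"'
parse_java_version "$sample"   # prints 1.8.0_161
```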
II. Installing Hadoop-2.7.4
1. Extract the archive
[root@hadoop01 ~]# cd /export/software
[root@hadoop01 ~]# tar -zxvf hadoop-2.7.4.tar.gz -C /export/servers
2. Add the following to /etc/profile
[root@hadoop01 ~]# vi /etc/profile
# HADOOP_HOME
export HADOOP_HOME=/export/servers/hadoop-2.7.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
3. Verify the Hadoop installation
Apply the profile changes first, then check the version:
[root@hadoop01 ~]# source /etc/profile
[root@hadoop01 ~]# hadoop version
If Hadoop version information is printed, the installation succeeded.
4. Edit the Hadoop configuration files
①. Edit hadoop-env.sh
[root@hadoop01 ~]# cd /export/servers/hadoop-2.7.4/etc/hadoop/
[root@hadoop01 hadoop]# vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/export/servers/jdk
②. Edit yarn-env.sh
[root@hadoop01 hadoop]# vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/export/servers/jdk
③. Edit core-site.xml
[root@hadoop01 hadoop]# vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/export/servers/hadoop-2.7.4/tmp</value>
</property>
</configuration>
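After editing, a quick text-level check that a property actually landed in the file can save a failed startup. A sketch (on a live cluster, `hdfs getconf -confKey fs.defaultFS` is the proper tool; the sample file below stands in for core-site.xml):

```shell
# Sketch: grep-style check that a property/value pair is in the XML.
# Assumes the <name> and <value> tags sit on consecutive lines, as in
# the config files in this guide.
get_prop() {  # usage: get_prop <property-name> <file>
    sed -n "/<name>$1<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$2"
}

cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
</configuration>
EOF

get_prop fs.defaultFS /tmp/core-site-sample.xml   # prints hdfs://hadoop01:9000
```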
④. Edit hdfs-site.xml
[root@hadoop01 hadoop]# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop02:50090</value>
</property>
</configuration>
⑤. Edit mapred-site.xml
[root@hadoop01 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop01 hadoop]# vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
⑥. Edit yarn-site.xml
[root@hadoop01 hadoop]# vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
⑦. Edit the slaves file
[root@hadoop01 hadoop]# vi slaves
hadoop01
hadoop02
hadoop03
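The slaves file can also be generated from the host list instead of typed by hand, which keeps it in sync with the hosts used elsewhere. A sketch (it writes to /tmp so nothing real is overwritten):

```shell
# Sketch: generate the slaves file from the cluster host list.
# Writes to /tmp here; on hadoop01 the target would be
# /export/servers/hadoop-2.7.4/etc/hadoop/slaves.
slaves_file=/tmp/slaves-sample
printf '%s\n' hadoop01 hadoop02 hadoop03 > "$slaves_file"
cat "$slaves_file"
```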
5. Distribute hadoop01's configuration to the other two hosts
[root@hadoop01 ~]# scp -r /etc/profile root@hadoop02:/etc/profile
[root@hadoop01 ~]# scp -r /etc/profile root@hadoop03:/etc/profile
Run the following on both hadoop02 and hadoop03:
[root@hadoop02 ~]# mkdir -p /export/servers
Back on hadoop01:
[root@hadoop01 ~]# scp -r /export/servers root@hadoop02:/export/
[root@hadoop01 ~]# scp -r /export/servers root@hadoop03:/export/
Then run the following on both hadoop02 and hadoop03:
[root@hadoop02 ~]# source /etc/profile
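The scp steps above can be condensed into one loop over the worker hosts. A dry-run sketch (it only prints the commands; drop the `echo`s to execute, and remember /etc/profile changes only take effect in a shell that has sourced them):

```shell
# Dry-run sketch of step 5: print the distribution commands per worker.
# Remove the `echo`s to actually copy; each worker still needs
# `source /etc/profile` in its own shell afterwards.
distribute() {
    local host
    for host in hadoop02 hadoop03; do
        echo "scp -r /etc/profile root@$host:/etc/profile"
        echo "scp -r /export/servers root@$host:/export/"
    done
}

distribute
```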
6. Start the Hadoop cluster
Run these on hadoop01 only (format the NameNode just once, on first setup):
[root@hadoop01 ~]# hdfs namenode -format
[root@hadoop01 ~]# start-all.sh
Check the running processes on hadoop01:
[root@hadoop01 ~]# jps
14066 Jps
2389 DataNode
2759 NodeManager
2253 NameNode
2638 ResourceManager
hadoop02:
[root@hadoop02 ~]# jps
1584 SecondaryNameNode
1685 NodeManager
1527 DataNode
6011 Jps
hadoop03:
[root@hadoop03 ~]# jps
5016 Jps
1531 DataNode
1611 NodeManager
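Instead of reading the jps listings by eye, a script can assert that every expected daemon is present. A sketch (the sample text mirrors the hadoop01 listing above; on a live node pass it `"$(jps)"`):

```shell
# Sketch: assert that expected daemons appear in jps-style output.
# Live usage would be: check_daemons "$(jps)" NameNode DataNode ...
check_daemons() {
    local out="$1"; shift
    local proc
    for proc in "$@"; do
        printf '%s\n' "$out" | grep -qw "$proc" || {
            echo "missing: $proc"; return 1
        }
    done
    echo "all daemons running"
}

sample='2389 DataNode
2759 NodeManager
2253 NameNode
2638 ResourceManager'

check_daemons "$sample" NameNode DataNode ResourceManager NodeManager
```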
7. Verify the deployment in a browser (from a machine that can resolve hadoop01)
hadoop01:50070 (HDFS NameNode web UI)
hadoop01:8088 (YARN ResourceManager web UI)
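The web UIs can also be probed from the shell. A dry-run sketch (the curl line is left commented out so this also runs off-cluster; note the ResourceManager UI lives on whichever host yarn.resourcemanager.hostname names, hadoop01 in the yarn-site.xml above, while 50070 is the NameNode UI):

```shell
# Sketch: list the cluster web UIs and (optionally) probe them.
# The curl probe is commented out so the loop also runs off-cluster.
ui_urls() {
    echo "http://hadoop01:50070"
    echo "http://hadoop01:8088"
}

for url in $(ui_urls); do
    echo "check: $url"
    # curl -sf -m 5 -o /dev/null "$url" && echo "  up" || echo "  unreachable"
done
```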