Building and Running a Single-Node Hadoop Image with Docker
1. Environment
Component information:

Component   Version
CentOS      7.9.2009
java        1.8.0_161
hadoop      3.1.3
docker      20.10.8
Service layout:

Machine   Service
node1     datanode
node1     namenode
node1     resourcemanager
node1     nodemanager
node1     secondarynamenode
2. Preparing the image
Use the latest CentOS image (the environment table above lists CentOS 7.9.2009; pin that tag instead of latest if you need the exact version).
docker pull centos:latest
3. Downloading the packages
1) Download Hadoop; version 3.1.3 is used here.
2) Download the JDK (8u161).
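A minimal sketch of fetching the packages on the host, assuming wget is available there. The Hadoop URL is the Apache archive; the JDK tarball cannot be fetched with a plain wget because Oracle requires accepting a license first, so download jdk-8u161-linux-x64.tar.gz manually from Oracle's Java SE 8 archive page.
mkdir -p /export/software
cd /export/software
## Hadoop 3.1.3 from the Apache archive
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz
## jdk-8u161-linux-x64.tar.gz: download manually from Oracle (license acceptance required), then place it here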
4. Starting the container
The image ships with neither wget nor a pre-installed sshd, so the usual scp and HTTP transfers are unavailable; instead, start the container with a bind mount to get the files inside.
The host directory /export/software is used here:
docker run -it --name hadoop -v /export/software:/usr/local/software centos:latest
5. Installing the JDK and Hadoop
Place the packages in /export/software on the host; they then show up inside the container under /usr/local/software.
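Inside the container, a quick check that the bind mount is visible (assuming both tarballs were already placed in /export/software on the host):
ls /usr/local/software
## should list hadoop-3.1.3.tar.gz and jdk-8u161-linux-x64.tar.gz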
First, plan the directory layout:
/usr/local/bigdata/jdk      JDK directory
/usr/local/bigdata/hadoop   Hadoop directory (jars, startup scripts, configuration, etc.)
/usr/local/bigdata/logs     log directory for easy access, created later
Extract the packages:
## Create the target directory and copy the packages over
mkdir /usr/local/bigdata
cp /usr/local/software/*.tar.gz /usr/local/bigdata
cd /usr/local/bigdata
## Extract, then rename
tar -zxvf hadoop-3.1.3.tar.gz
tar -zxvf jdk-8u161-linux-x64.tar.gz
mv hadoop-3.1.3 hadoop
mv jdk1.8.0_161 jdk
## Remove the tarballs to keep the container small
rm -f hadoop-3.1.3.tar.gz
rm -f jdk-8u161-linux-x64.tar.gz
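A quick sanity check of the resulting layout:
ls /usr/local/bigdata
## expected: hadoop  jdk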
6. Installing sshd
Hadoop coordinates its daemons over SSH, and the base image does not include the sshd service, so it has to be installed.
yum update
Answer Y at each prompt. After the update completes, install sshd:
yum install -y openssl openssh-server
yum install openssh*
Press Enter through each prompt to generate the key pairs:
ssh-keygen -t rsa
ssh-keygen -t dsa
ssh-keygen -t ecdsa
ssh-keygen -t ed25519
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Edit the sshd configuration file:
vi /etc/ssh/sshd_config
The section to change:
### Original
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
### Change to
HostKey /root/.ssh/id_rsa
HostKey /root/.ssh/id_ecdsa
HostKey /root/.ssh/id_ed25519
HostKey /root/.ssh/id_dsa
Allow remote logins:
vi /etc/pam.d/sshd
# Comment out the following line with a leading #
# account required pam_nologin.so
Start the sshd service and check its status:
/usr/sbin/sshd
ps -ef | grep sshd
A successful start looks like this:
root 311 1 0 06:43 ? 00:00:00 /usr/sbin/sshd
root 332 1 0 06:44 pts/0 00:00:00 grep --color=auto sshd
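With sshd running and the public key copied into authorized_keys, a passwordless login to localhost should now work (the -o flag just suppresses the first-connection host-key prompt):
ssh -o StrictHostKeyChecking=no root@localhost hostname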
7. Installing net-tools
yum install net-tools
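net-tools provides netstat, which is handy for confirming listeners, e.g. the sshd started in the previous step:
netstat -lntp | grep :22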
8. Configuring environment variables
Continuing from the previous step as the root user:
vi ~/.bashrc
Replace the contents with the following:
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export JAVA_HOME=/usr/local/bigdata/jdk
export CLASSPATH=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
# hadoop env
export HADOOP_HOME=/usr/local/bigdata/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/bin
PATH=$PATH:$HOME/bin
export PATH
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
Save with :wq, then reload the environment variables (the file edited above is ~/.bashrc, so source that file):
source ~/.bashrc
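Verify that both toolchains are now on the PATH:
java -version
hadoop version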
Next, update the Hadoop configuration, starting with core-site.xml:
cd /usr/local/bigdata/hadoop/etc/hadoop
vi core-site.xml
Replace the contents with:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:8020</value>
    </property>
</configuration>
Save with :wq, then create the log directory:
mkdir /usr/local/bigdata/logs
Next, rewrite hadoop-env.sh:
rm -f hadoop-env.sh
vi hadoop-env.sh
Replace the contents with:
export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
if [ "$HADOOP_CLASSPATH" ]; then
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
else
export HADOOP_CLASSPATH=$f
fi
done
export HADOOP_HEAPSIZE=1024
export HADOOP_NAMENODE_INIT_HEAPSIZE=1024
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HDFS_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx1024m $HADOOP_PORTMAP_OPTS"
export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"
# define the log directory before it is referenced below
export HADOOP_LOG_DIR=/usr/local/bigdata/logs
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
export HADOOP_SECURE_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_IDENT_STRING=hadoop
vi hdfs-site.xml
Replace the contents with the following (HOSTNAME is a literal placeholder; the startup script created in step 11 substitutes the container's real hostname at boot):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>HOSTNAME:9870</value>
    </property>
</configuration>
9. Formatting the NameNode
Run the following command to format the NameNode:
hdfs namenode -format
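On the first run this prints a "successfully formatted" message and exits. If it is ever re-run against an existing storage directory it prompts for confirmation; for non-interactive use, the format command accepts flags to skip the prompt (a sketch):
hdfs namenode -format -force -nonInteractive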
10. Starting Hadoop
cd /usr/local/bigdata/hadoop/sbin
1) Near the top of start-dfs.sh and stop-dfs.sh, add the users the daemons run as:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
2) Near the top of start-yarn.sh and stop-yarn.sh, add:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
3) Start the services:
cd /usr/local/bigdata/hadoop/sbin
./start-dfs.sh
./start-yarn.sh
Check with jps; a successful start shows:
1122 SecondaryNameNode
900 DataNode
1399 ResourceManager
1849 Jps
779 NameNode
1517 NodeManager
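A quick HDFS smoke test to confirm the filesystem is writable (the path is arbitrary):
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/hosts /tmp/smoke/
hdfs dfs -ls /tmp/smoke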
11. Stopping Hadoop and adding a startup script
1) Stop the services:
cd /usr/local/bigdata/hadoop/sbin
./stop-dfs.sh
./stop-yarn.sh
2) Create core-site.xml.template and hdfs-site.xml.template.
hdfs-site.xml.template:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>HOSTNAME:9870</value>
    </property>
</configuration>
core-site.xml.template
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://HOSTNAME:8020</value>
    </property>
</configuration>
Note that fs.defaultFS stays on port 8020, matching the earlier core-site.xml and the 8020:8020 port mapping used when running the container in step 13.
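The literal HOSTNAME placeholder in both templates is substituted by the startup script below. To preview the substitution inside the container (the same sed invocation the script uses, written to stdout instead of a file):
sed "s/HOSTNAME/$(hostname)/" /usr/local/bigdata/hadoop/etc/hadoop/core-site.xml.template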
3) Create the startup script:
vi /etc/bootstrap.sh
#!/bin/bash
source ~/.bash_profile
source /etc/profile
rm -rf /tmp/*
: ${HADOOP_PREFIX:=/usr/local/bigdata/hadoop}
$HADOOP_PREFIX/bin/hdfs namenode -format
# source (not execute) hadoop-env.sh so its exports apply to this shell
source $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
# installing libraries if any - (resource urls added comma separated to the ACP system variable)
cd $HADOOP_PREFIX/share/hadoop/common ; for cp in ${ACP//,/ }; do echo == $cp; curl -LO $cp ; done; cd -
# altering the core-site configuration
sed s/HOSTNAME/$HOSTNAME/ /usr/local/bigdata/hadoop/etc/hadoop/core-site.xml.template > /usr/local/bigdata/hadoop/etc/hadoop/core-site.xml
sed s/HOSTNAME/$HOSTNAME/ /usr/local/bigdata/hadoop/etc/hadoop/hdfs-site.xml.template > /usr/local/bigdata/hadoop/etc/hadoop/hdfs-site.xml
/usr/sbin/sshd
$HADOOP_PREFIX/sbin/start-dfs.sh
$HADOOP_PREFIX/sbin/start-yarn.sh
if [[ $1 == "-d" ]]; then
while true; do sleep 1000; done
fi
if [[ $1 == "-bash" ]]; then
/bin/bash
fi
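The docker run command in step 13 invokes this script directly, so it must be executable:
chmod +x /etc/bootstrap.sh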
12. Exporting the container as an image
At this point the Hadoop installation, configuration, and startup are complete. Next, export this container as an image, so that multiple container instances can later be started from it to build a single-machine cluster.
docker export hadoop > hadoop.tar
Import the tarball as an image:
docker import hadoop.tar hadoop:3.1.3
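A quick check that the import produced the expected image:
docker images hadoop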
13. Running the container and checking startup
1) Run (note the image is referenced as hadoop:3.1.3, the tag created by the import above):
docker run --name hadoop3.1.3 -i -t -p 8020:8020 -p 9870:9870 -p 8088:8088 -p 8040:8040 -p 8042:8042 -p 49707:49707 -p 50010:50010 -p 50075:50075 -p 50090:50090 hadoop:3.1.3 /etc/bootstrap.sh -bash
(The 500xx mappings are Hadoop 2 defaults; Hadoop 3 moved the DataNode and SecondaryNameNode data/web ports to 9866, 9864, and 9868, so map those as well if you need them.)
2) Exec into the container to check:
docker exec -it hadoop3.1.3 bash
jps
1041 NodeManager
914 ResourceManager
644 SecondaryNameNode
1431 Jps
408 DataNode
269 NameNode
Web UI access from the host:
http://<host-ip>:9870 (HDFS NameNode)
http://<host-ip>:8088 (YARN ResourceManager)
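From the host, a quick reachability check of the mapped ports (expect a 200 or a 30x redirect status once the daemons are up):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9870
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088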