Hadoop 3.3 + Spark 3.2 Pseudo-Distributed Setup on M1 macOS
Install Java 8
I originally installed Java 17, but Hadoop could not start YARN with it, so I had to drop down a version. Java 8 builds for the M1 can be downloaded from Azul: Java Download | Java 8, Java 11, Java 13 - Linux, Windows & macOS (azul.com)
Managing multiple Java versions
Open ~/.zshrc and add:
export JAVA_17_HOME="/Library/Java/JavaVirtualMachines/jdk-17.0.2.jdk/Contents/Home"
alias java17='export JAVA_HOME=$JAVA_17_HOME'
export JAVA_11_HOME="/Library/Java/JavaVirtualMachines/zulu-11.jdk/Contents/Home"
alias java11='export JAVA_HOME=$JAVA_11_HOME'
export JAVA_8_HOME="/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home"
alias java8='export JAVA_HOME=$JAVA_8_HOME'
# use Java 17 by default
export JAVA_HOME=$JAVA_17_HOME
Run source ~/.zshrc. For day-to-day use:
java8 # switch to Java 8
java -version # check the current Java version
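To double-check that the switch took effect, you can also verify that JAVA_HOME points at the Zulu 8 JDK (a quick sanity check; the path is the JAVA_8_HOME set in ~/.zshrc above):
java8
echo $JAVA_HOME # should print /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home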
Hadoop 3 pseudo-distributed mode
Download the aarch64 build from Apache Hadoop and extract it to ~/opt (or /usr/local), then set the following in hadoop-3.3.2/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home
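Optionally, you can also add HADOOP_HOME to ~/.zshrc so the hadoop, hdfs, and yarn commands are available everywhere. This is not required by the steps below, which run everything from inside the Hadoop directory; the path assumes the archive was extracted to ~/opt as above:
export HADOOP_HOME=$HOME/opt/hadoop-3.3.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin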
The steps below follow the pseudo-distributed setup guide in the official documentation: Apache Hadoop 3.3.2 – Hadoop: Setting up a Single Node Cluster.
- Set up passwordless ssh. In macOS System Preferences -> Sharing, enable Remote Login, then:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
ssh localhost # check whether a password is still required
- Configure hadoop-3.3.2/etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
- Configure hadoop-3.3.2/etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
- Configure hadoop-3.3.2/etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
- Configure hadoop-3.3.2/etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
- Add the following to sbin/start-dfs.sh and sbin/stop-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
- Add the following to sbin/start-yarn.sh and sbin/stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
- Start everything from the command line:
bin/hdfs namenode -format # format the filesystem
# start NameNode and DataNode
# once up, the NameNode web UI is at http://localhost:9870/
sbin/start-dfs.sh
# create the default user directory
bin/hdfs dfs -mkdir -p /user/<username>
# start ResourceManager and NodeManager
# once up, the ResourceManager web UI is at http://localhost:8088/
sbin/start-yarn.sh
- Check that everything is up: run jps and verify that NameNode, DataNode, ResourceManager, and NodeManager are all listed.
If a daemon fails to start, check the corresponding log under hadoop-3.3.2/logs.
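To verify the whole stack end to end, you can run one of the example jobs that ship with the release, as in the official single-node guide (a sketch; the example jar name corresponds to the 3.3.2 release, and <username> is your own user):
bin/hdfs dfs -mkdir -p /user/<username>/input
bin/hdfs dfs -put etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar grep input output 'dfs[a-z.]+'
bin/hdfs dfs -cat output/*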
Spark 3
- Install Scala 2.12 from the command line:
curl -fL https://github.com/coursier/launchers/raw/master/cs-x86_64-apple-darwin.gz | gzip -d > cs && chmod +x cs && (xattr -d com.apple.quarantine cs || true)
./cs install scala:2.12.15 scalac:2.12.15
- Download Spark from Downloads | Apache Spark. Note that different Spark releases require specific Hadoop and Scala versions; here I install 3.2.1. Extract it to ~/opt (or /usr/local) as well, rename spark-3.2.1/conf/spark-env.sh.template to spark-3.2.1/conf/spark-env.sh, and add the following to spark-3.2.1/conf/spark-env.sh:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home
export SCALA_HOME=/Users/<username>/opt/spark-3.2.1
export HADOOP_HOME=/Users/<username>/opt/hadoop-3.3.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
- Start Spark with ./spark-3.2.1/sbin/start-all.sh. If anything fails to start, check the logs under the logs directory as before.
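To confirm Spark itself is working, you can run the bundled SparkPi example as a quick smoke test (run-example ships with the Spark distribution); after start-all.sh the standalone master web UI should also be reachable at http://localhost:8080/:
./spark-3.2.1/bin/run-example SparkPi 10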