1. Executing a SQL statement fails with the following error:

hive> insert into table student values(1,'abc');
Query ID = atguigu_20200814150018_318272cf-ede4-420c-9f86-c5357b57aa11
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.

Cause: the current setup pairs Hive 3.1.2 with Spark 3.0.0. This combination is not an officially released pairing, so Hive has to be compiled against that Spark version yourself. (org.apache.spark.AccumulatorParam was removed in Spark 3.0, so a Hive build that still references it fails at runtime.)

It is recommended to use one of the Hive + Spark version pairings published officially.

Install a Hive that was compiled together with the matching Spark version. The version pairings currently recommended on the official site are:

Hive Version    Spark Version
1.1.x           1.2.0
1.2.x           1.3.1
2.0.x           1.5.0
2.1.x           1.6.0
2.2.x           1.6.0
2.3.x           2.0.0
3.0.x           2.3.0
master          2.3.0
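When deliberately building Hive against a Spark version outside this table (such as Hive 3.1.2 with Spark 3.0.0), the usual community approach is to rebuild Hive from source with the Spark version in the top-level pom.xml aligned to the cluster, then package with `mvn clean package -Pdist -DskipTests`. A minimal sketch of the pom change is below; the property names match the Hive source tree, but treat the exact values and build flags as assumptions to verify against the Hive build documentation for your release:

```xml
<!-- In the Hive source pom.xml: align Spark with the cluster before rebuilding.
     Versions shown are examples for a Hive 3.1.2 + Spark 3.0.0 build;
     the exact property set may differ per Hive release. -->
<properties>
    <spark.version>3.0.0</spark.version>
    <scala.binary.version>2.12</scala.binary.version>
</properties>
```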

If the versions already match but the error still appears:

If HA is configured, hive-site.xml should contain the following (the value points at the HDFS nameservice rather than a single NameNode):

<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://mycluster/spark-jars/*</value>
</property>
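For a non-HA cluster, the same property points at a plain NameNode address instead of the nameservice. A sketch, assuming a hypothetical NameNode host and port; both, and the HDFS path, must match your own cluster and the location where the Spark jars were uploaded:

```xml
<!-- Non-HA variant (namenode host/port are placeholders; adjust to your cluster). -->
<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://namenode:8020/spark-jars/*</value>
</property>
```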

If neither of the above is the cause:

Delete the Hive installation and install it again. One possibility is that you ran mv on the extracted directory before extraction had finished, leaving an incomplete set of jars under lib.
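As a quick sanity check before extracting and moving the directory, `tar tzf` lists the archive's contents and fails on a truncated or corrupted download. The sketch below builds a tiny stand-in archive so it is self-contained; in practice, substitute the real Hive tarball (e.g. apache-hive-3.1.2-bin.tar.gz):

```shell
# Minimal sketch: list an archive to verify it is intact before extracting
# and moving the result. A stand-in archive is created here so the example
# is self-contained; use the real Hive tarball in practice.
mkdir -p demo-hive/lib && echo "stub" > demo-hive/lib/sample.jar
tar czf demo-hive.tar.gz demo-hive
tar tzf demo-hive.tar.gz > /dev/null && echo "archive OK"
rm -rf demo-hive demo-hive.tar.gz
```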
