hadoop-aws and aws-java-sdk versions compatible with Spark 3.1.2
Question: hadoop-aws and aws-java-sdk versions compatible with Spark 3.1.2
I ran into version compatibility issues while updating a Spark project that uses hadoop-aws and aws-java-sdk-s3 to Spark 3.1.2 with Scala 2.12.15, in order to run it on EMR 6.5.0.
I checked the EMR 6.5.0 release notes, which list these versions:
- AWS SDK for Java v1.12.31
- Spark v3.1.2
- Hadoop v3.2.1
I'm currently running Spark locally to verify compatibility with the versions above, and I get the following error:
java.lang.NoSuchFieldError: SERVICE_ID
at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4925)
at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4911)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1441)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1381)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:381)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:380)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:314)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
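For reference, the local check that hits this code path is roughly of the following shape (a sketch, not the original code; the bucket name and the credentials provider setting are illustrative placeholders):

import org.apache.spark.sql.SparkSession

// Minimal local session; the S3A filesystem comes from hadoop-aws on the classpath.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("s3a-compat-check")
  // The credentials provider is an assumption; any standard chain would do.
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
  .getOrCreate()

// Any S3A access initializes S3AFileSystem, which is where SERVICE_ID fails above.
spark.read.text("s3a://some-bucket/some-prefix/").show()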
I also tried checking which aws-java-sdk version hadoop-aws is built against: hadoop-aws 3.2.1 depends on aws-java-sdk 1.11.375, as can be found here.
However, those versions lead to a different error:
java.lang.NoSuchMethodError: 'org.apache.http.client.methods.HttpRequestBase com.amazonaws.http.HttpResponse.getHttpRequest()'
at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:57)
at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:29)
at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1555)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1416)
at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:196)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:195)
at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:346)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:451)
at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
build.sbt:
scalaVersion := "2.12.15"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "3.1.2",
"org.apache.spark" %% "spark-sql" % "3.1.2",
"com.fasterxml.jackson.core" % "jackson-databind" % "2.12.2",
"com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.2",
"org.apache.hadoop" % "hadoop-client" % "3.2.1",
"org.apache.hadoop" % "hadoop-aws" % "3.2.1",
"com.amazonaws" % "aws-java-sdk-s3" % "1.11.375"
)
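As an aside, sbt's built-in evicted task shows which transitive versions were evicted due to conflicts, and dependencyOverrides can pin one explicitly; a minimal sketch (the httpclient version here is an illustrative assumption, not a verified fix):

// Run `sbt evicted` first to list the conflicting transitive versions.
// Then pin an artifact explicitly; the version below is for illustration only.
dependencyOverrides += "org.apache.httpcomponents" % "httpclient" % "4.5.13"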
What are the correct versions of these libraries to use?
Answer
The EMR docs say to use their own s3: connector... if you are running on EMR, do exactly that.
You should use s3a on other installations, including local ones, as in the sketch below.
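In other words (a sketch; spark is an existing SparkSession and the bucket name is a placeholder):

// On EMR, Amazon's EMRFS connector owns the s3:// scheme:
val onEmr = spark.read.parquet("s3://example-bucket/path/")

// Everywhere else (local machines, self-managed clusters), use the open-source S3A connector:
val elsewhere = spark.read.parquet("s3a://example-bucket/path/")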
From there:
- mvnrepository is a good way to work out dependencies. Here is the summary for hadoop-aws, although its 3.2.1 declaration misses out all the dependencies; the aws-java-sdk version is 1.11.375.
- The stack traces you are seeing come from trying to get the AWS S3 SDK, core SDK, jackson, and httpclient in sync.
- It's easiest to give up and use the full aws-java-sdk-bundle, which has a consistent set of AWS artifacts and private versions of the dependencies. It is huge, but it takes away all the problems with transitive dependencies (a sketch of this follows below).
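A sketch of the bundle approach for the build.sbt above, replacing aws-java-sdk-s3 with the bundle (the combination is consistent with the hadoop-aws 3.2.1 dependency noted above, but it is an assumption, not a configuration tested here):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.1.2",
  "org.apache.spark" %% "spark-sql" % "3.1.2",
  "org.apache.hadoop" % "hadoop-client" % "3.2.1",
  "org.apache.hadoop" % "hadoop-aws" % "3.2.1",
  // The bundle ships a consistent, shaded set of SDK dependencies (jackson, httpclient, ...),
  // matching the aws-java-sdk version hadoop-aws 3.2.1 was built against.
  "com.amazonaws" % "aws-java-sdk-bundle" % "1.11.375"
)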