I ran into version compatibility issues while updating a Spark project that uses both hadoop-aws and aws-java-sdk-s3 to Spark 3.1.2 with Scala 2.12.15, in order to run it on EMR 6.5.0.

I checked the EMR 6.5.0 release notes, which list these versions:

  • AWS SDK for Java v1.12.31
  • Spark v3.1.2
  • Hadoop v3.2.1

I am currently running Spark locally to verify compatibility of the above versions, and I get the following error:

 java.lang.NoSuchFieldError: SERVICE_ID
    at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4925)
    at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4911)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1441)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1381)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:381)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:380)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:314)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)

I also tried checking which version of aws-java-sdk hadoop-aws is built against. According to its published POM, hadoop-aws 3.2.1 relies on aws-java-sdk 1.11.375.
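One way to confirm which AWS SDK artifact hadoop-aws actually pulls in is to resolve its dependency tree from the command line. This is a sketch assuming the coursier CLI (`cs`) is installed; the grep just filters the resolved coordinates down to the AWS artifacts:

```shell
# Resolve hadoop-aws 3.2.1 and show which com.amazonaws artifacts it declares
cs resolve org.apache.hadoop:hadoop-aws:3.2.1 | grep com.amazonaws
```

The same information can be read directly from the artifact's POM on mvnrepository or Maven Central.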

However, these versions result in a different error:

 java.lang.NoSuchMethodError: 'org.apache.http.client.methods.HttpRequestBase com.amazonaws.http.HttpResponse.getHttpRequest()'
    at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:57)
    at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:29)
    at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1555)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
    at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1416)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:196)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:195)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:346)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
    at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:451)
    at java.base/java.io.DataInputStream.read(DataInputStream.java:149)

build.sbt:

scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.1.2",
  "org.apache.spark" %% "spark-sql"  % "3.1.2",
  "com.fasterxml.jackson.core"    % "jackson-databind"     % "2.12.2",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.2",
  "org.apache.hadoop"             % "hadoop-client"        % "3.2.1",
  "org.apache.hadoop"             % "hadoop-aws"           % "3.2.1",
  "com.amazonaws"                 % "aws-java-sdk-s3"      % "1.11.375"
)

What are the correct versions for these libraries?

Answers

The EMR docs say "use our own s3: connector". If you are running on EMR, do exactly that.

You should use the s3a connector on other installations, including local ones.
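For a local run against the s3a connector, the session configuration looks roughly like the sketch below. The bucket path and the choice of credentials provider are placeholders, not taken from the question:

```scala
import org.apache.spark.sql.SparkSession

// Local SparkSession wired to the s3a connector (hadoop-aws on the classpath).
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  // Pick up credentials from env vars, system properties, profile, etc.
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
          "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
  .getOrCreate()

// Note the s3a:// scheme; on EMR you would use s3:// with the EMRFS connector.
val df = spark.read.parquet("s3a://my-bucket/some/path/")
```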

Beyond that:

  • mvnrepository is a good way to get a view of what the dependencies are
    * its summary for hadoop-aws is useful, though the 3.2.1 declaration misses out all the dependencies; the AWS SDK version it needs is 1.11.375
  • the stack traces you are seeing come from trying to get the AWS S3 SDK, core SDK, Jackson, and httpclient in sync
  • it's easiest to give up and just go with the full aws-java-sdk-bundle, which has a consistent set of AWS artifacts and shaded (private) versions of the dependencies. It is huge, but it takes away all issues related to transitive dependencies
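Applied to the build.sbt from the question, that advice amounts to replacing aws-java-sdk-s3 with the bundle at the version hadoop-aws 3.2.1 was built against. A sketch (marking the Spark artifacts Provided is optional and only matters when packaging for a cluster):

```scala
scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.1.2",
  "org.apache.spark" %% "spark-sql"  % "3.1.2",
  "org.apache.hadoop" % "hadoop-client" % "3.2.1",
  "org.apache.hadoop" % "hadoop-aws"    % "3.2.1",
  // The bundle ships every AWS module plus shaded httpclient/Jackson,
  // so nothing on your classpath can drift out of sync with it.
  "com.amazonaws"     % "aws-java-sdk-bundle" % "1.11.375"
)
```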