Local PySpark cannot access S3 files via the AWS credentials file or environment variables?

sigwle7e · posted 2021-05-29 in Spark

I am new to PySpark. I installed PySpark and the related packages locally, as listed below, to set up a local development/testing environment for ETL over big data stored in AWS S3 buckets.

spark 2.4.5
Scala version 2.11.12
Java HotSpot(TM) 64-Bit Server VM, 1.8.0_231
hadoop-*.jars 2.7.3 (default hadoop jars that come with spark 2.4.5 package)

Two dependent JARs downloaded to $SPARK_HOME/jars/: aws-java-sdk-1.7.4.jar and hadoop-aws-2.7.3.jar

For local SDK access to AWS there is a ~/.aws/credentials file storing multiple profiles; the [default] profile stores aws_access_key_id, aws_secret_access_key and aws_session_token for one designated user.
Ideally I would like Spark to retrieve the credentials from that profile via the provider com.amazonaws.auth.DefaultAWSCredentialsProviderChain, but that does not seem to work (com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain). I then extracted the credentials from the file manually and supplied them either through com.amazonaws.auth.TemporaryAWSCredentialsProvider in the Spark conf (as in the code below) or as environment variables in spark-env.sh, but both fail with Status Code: 403 Forbidden.
Here are the details of the code and the error.
The following script tries to read a CSV from an AWS S3 bucket via the s3a scheme:
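The manual extraction step can be sketched like this (a minimal sketch: the file path and section layout follow the standard AWS credentials format, and `get_temporary_credentials` is a hypothetical helper name, not part of any SDK):

```python
import configparser
import os

def get_temporary_credentials(path=os.path.expanduser("~/.aws/credentials"),
                              profile="default"):
    """Read access key, secret key and session token from an AWS
    credentials file (INI format, one section per profile) and map
    them onto the corresponding fs.s3a.* Hadoop option names."""
    parser = configparser.ConfigParser()
    parser.read(path)
    section = parser[profile]
    return {
        "fs.s3a.access.key": section["aws_access_key_id"],
        "fs.s3a.secret.key": section["aws_secret_access_key"],
        "fs.s3a.session.token": section["aws_session_token"],
    }
```

The resulting dict can then be fed into the Spark configuration instead of pasting the secrets into the script.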

import pyspark
from pyspark.sql import SQLContext

# Temporary credentials (access key / secret key / session token)
# extracted manually from the [default] profile in ~/.aws/credentials
config = pyspark.SparkConf().setAll([
    ("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem"),
    ("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.TemporaryAWSCredentialsProvider"),
    ("fs.s3a.access.key", "xxxx"),
    ("fs.s3a.secret.key", "xxxx"),
    ("fs.s3a.session.token", "xxxx"),
])
sc = pyspark.SparkContext(conf=config)
# Enable V4 request signing, required by newer AWS regions
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")
sql = SQLContext(sc)
df = (sql.read
         .format("csv")
         .option("header", "true")
         .load("s3a://bucket/prefix/file_name"))
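One thing worth double-checking in the snippet above: when Hadoop options such as fs.s3a.* are set through SparkConf rather than directly on the Hadoop configuration, Spark only forwards entries carrying the documented spark.hadoop. prefix into the underlying Hadoop Configuration. A minimal helper to apply that prefix (the function name is mine, for illustration):

```python
def to_spark_hadoop_conf(hadoop_options):
    """Prefix plain Hadoop keys (e.g. fs.s3a.access.key) with
    'spark.hadoop.' so that SparkConf propagates them into the
    Hadoop Configuration; already-prefixed keys pass through."""
    return [
        (k if k.startswith("spark.hadoop.") else "spark.hadoop." + k, v)
        for k, v in hadoop_options
    ]

# Usage sketch:
#   config = pyspark.SparkConf().setAll(to_spark_hadoop_conf([
#       ("fs.s3a.access.key", "xxxx"),
#       ...
#   ]))
```

An equivalent alternative is to set the options after the context is created, via `sc._jsc.hadoopConfiguration().set(key, value)`.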

The error follows; note that the access key, secret key and session token were verified with boto3 to be valid:

py4j.protocol.Py4JJavaError: An error occurred while calling o130.load.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxx, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: xxxx
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:835)

I also tried the following:
Setting ("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain"), which gives: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
Setting the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN in spark-env.sh (the same values that successfully access S3 when tested with boto3), which gives the same error as TemporaryAWSCredentialsProvider: py4j.protocol.Py4JJavaError: An error occurred while calling o130.load: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: x, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: x
Using other versions of the aws-java-sdk and hadoop-aws JARs, taken from a newer Hadoop 3.2.1 installation (aws-java-sdk-bundle-1.11.375.jar, hadoop-aws-3.2.1.jar, hadoop-common-3.2.1.jar) and from Hadoop 2.8.5. These give a class-loading error instead: py4j.protocol.Py4JJavaError: An error occurred while calling o86.load: NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
I am really struggling here. Could you suggest what might be wrong and what the possible solutions are? Many thanks.
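The NoClassDefFoundError for org/apache/hadoop/fs/StreamCapabilities usually indicates that the hadoop-aws JAR version does not match the Hadoop JARs Spark ships with: that class does not exist in the Hadoop 2.7 JARs bundled with Spark 2.4.5, but newer hadoop-aws releases reference it. One commonly suggested approach (a sketch, not verified against this exact setup) is to let Spark resolve a hadoop-aws version matching the bundled Hadoop 2.7.3, instead of dropping mismatched JARs into $SPARK_HOME/jars/:

```shell
# Launch pyspark with a hadoop-aws version matching the bundled
# Hadoop 2.7.3; the Maven coordinates pull in a compatible aws-java-sdk.
pyspark --packages org.apache.hadoop:hadoop-aws:2.7.3
```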

No answers yet!