I'm trying to read a txt file from S3 using Spark, but I get this error:
No FileSystem for scheme: s3
Here is my code:
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("first")
sc = SparkContext(conf=conf)
data = sc.textFile("s3://"+AWS_ACCESS_KEY+":" + AWS_SECRET_KEY + "@/aaa/aaa/aaa.txt")
header = data.first()
Here is the full traceback:
An error occurred while calling o25.partitions.
: java.io.IOException: No FileSystem for scheme: s3
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
How can I fix this?
If you are using a local machine, you can use boto3:
import boto3

s3 = boto3.resource('s3')
# get a handle on the bucket that holds your file
bucket = s3.Bucket('yourBucket')
# get a handle on the object you want (i.e. your file)
obj = bucket.Object(key='yourFile.extension')
# get the object
response = obj.get()
# read the contents of the file and split it into a list of lines
# (the body is returned as bytes, so decode it before splitting)
lines = response['Body'].read().decode('utf-8').split('\n')
(Don't forget to set up your AWS S3 credentials.)
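If you then want the data back in Spark, you can hand the lines to your existing SparkContext. A minimal sketch, assuming the sc from the question:

# turn the downloaded lines into an RDD (sc is the SparkContext from the question)
rdd = sc.parallelize(lines)
header = rdd.first()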
If you are using an AWS virtual machine (EC2), another clean solution is to grant S3 permissions to the EC2 instance and launch pyspark with the following command:
pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2
If you add other packages, make sure the format is 'groupId:artifactId:version' and that the packages are comma-separated.
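Once that shell is up with the packages on the classpath, a plain s3a:// read works. A minimal sketch (the bucket and key are placeholders, and the instance's S3 permissions supply the credentials):

# sc already exists inside the pyspark shell started above
data = sc.textFile("s3a://yourBucket/aaa/aaa/aaa.txt")
header = data.first()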
If you are using pyspark from Jupyter Notebooks, this works:
import os
import pyspark
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 pyspark-shell'
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc = SparkContext()
sqlContext = SQLContext(sc)
filePath = "s3a://yourBucket/yourFile.parquet"
df = sqlContext.read.parquet(filePath) # Parquet file read example
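If your machine has no IAM role attached, you also need to hand your credentials to the s3a connector explicitly. A minimal sketch using the standard fs.s3a.* Hadoop options, reusing the AWS_ACCESS_KEY / AWS_SECRET_KEY variables from the question:

# set the credentials on the underlying Hadoop configuration before any read
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_KEY)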
The answers above are correct about needing to specify the Hadoop <-> AWS dependencies.
They don't cover newer Spark versions, though, so I'll post what worked for me, especially since this changed once Spark was upgraded to Hadoop 3.0.
Spark 3.0.3:
--packages org.apache.hadoop:hadoop-aws:2.10.2,org.apache.hadoop:hadoop-client:2.10.2 --exclude-packages com.google.guava:guava

Spark 3.2.x:
--packages org.apache.spark:spark-hadoop-cloud_2.12:3.2.0

For Spark 3.2.x you should use the org.apache.spark:spark-hadoop-cloud_2.12:3.2.0 library with --packages; note that org.apache.spark:hadoop-cloud_2.12:<SPARK_VERSION> does not exist in the central maven repository.

How do you set these extra dependencies? Either with the --packages and --exclude-packages arguments of spark-submit, or with the spark.jars.packages and spark.jars.excludes Spark configurations in SparkSession.builder:
from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.10.2,org.apache.hadoop:hadoop-client:2.10.2")
    .config("spark.jars.excludes", "com.google.guava:guava")
    .getOrCreate()
)
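The same settings on the command line, as a sketch in which your_app.py stands in for your script:

spark-submit \
  --packages org.apache.hadoop:hadoop-aws:2.10.2,org.apache.hadoop:hadoop-client:2.10.2 \
  --exclude-packages com.google.guava:guava \
  your_app.py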
s3 vs s3a

The above adds the S3AFileSystem to Spark's classpath, which makes s3a://... paths work. When you additionally set this Spark configuration, not only s3a://... but also s3://... paths will work:

spark.hadoop.fs.s3.impl = org.apache.hadoop.fs.s3a.S3AFileSystem

It can be set via SparkSession.builder.config() when building the session, or via --conf when using spark-submit.
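A minimal sketch of that setup end to end, with the extra mapping added to the builder from above (the bucket and file names are placeholders):

from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.10.2,org.apache.hadoop:hadoop-client:2.10.2")
    .config("spark.jars.excludes", "com.google.guava:guava")
    # map the bare s3:// scheme onto the s3a connector
    .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .getOrCreate()
)

# both of these now resolve to S3AFileSystem
df1 = spark.read.text("s3a://yourBucket/yourFile.txt")
df2 = spark.read.text("s3://yourBucket/yourFile.txt")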
In my case the jars live under /home/ec2-user/anaconda3/envs/ENV-XXX/lib/python3.6/site-packages/pyspark/jars. The two files are:
hadoop-aws-2.10.1-amzn-0.jar

from pyspark.sql import SparkSession

spark = SparkSession.builder\
    .config("spark.jars.packages", "org.apache.spark:spark-hadoop-cloud_2.12:3.3.0")\
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")\
    .getOrCreate()

But you also need to put the spark-hadoop-cloud_2.12 jar and its dependencies into your SPARK_HOME jars directory. Also check the dependencies of the spark-hadoop-cloud_2.12 jar on Maven Central to make sure you download the correct versions. Adjust the path below to your Spark home; here it is:
/usr/local/lib/python3.10/site-packages/pyspark
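A sketch of fetching the jar by hand into that folder; the 3.3.0 version here is just an example, and the full dependency list should come from the jar's Maven Central page rather than from this snippet:

cd /usr/local/lib/python3.10/site-packages/pyspark/jars
# download spark-hadoop-cloud itself; repeat for each dependency listed on Maven Central
wget https://repo1.maven.org/maven2/org/apache/spark/spark-hadoop-cloud_2.12/3.3.0/spark-hadoop-cloud_2.12-3.3.0.jar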