How to read all files in an S3 folder/bucket using sparklyr in R?

Question — 2 votes, 1 answer

I have tried the code below, and several variations of it, to read all of the files in an S3 folder, but none of them seem to work. Sensitive information/code has been removed from the script below. There are 6 files, each about 6.5 GB.

# Spark connection
sc <- spark_connect(master = "local", config = config)

rd_1 <- spark_read_csv(sc, name = "Retail_1",
                       path = "s3a://mybucket/xyzabc/Retail_Industry/*/*",
                       header = FALSE, delimiter = "|")


# This is the S3 bucket/folder for the files [one of the file names is Industry_Raw_Data_000]
s3://mybucket/xyzabc/Retail_Industry/Industry_Raw_Data_000

This is the error I get:

Error: org.apache.spark.sql.AnalysisException: Path does not exist: s3a://mybucket/xyzabc/Retail_Industry/*/*;
at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:710)
r apache-spark amazon-s3 rstudio sparklyr
1 Answer (2 votes)

After a few weeks of googling this problem, it finally got solved. Here is the solution:

library(sparklyr)

# AWS credentials as environment variables (placeholder values)
Sys.setenv(AWS_ACCESS_KEY_ID = "abc")
Sys.setenv(AWS_SECRET_ACCESS_KEY = "xyz")

# Spark config with the packages needed for S3: CSV reader, AWS SDK, hadoop-aws
config <- spark_config()

config$sparklyr.defaultPackages <- c(
  "com.databricks:spark-csv_2.10:1.5.0",
  "com.amazonaws:aws-java-sdk-pom:1.10.34",
  "org.apache.hadoop:hadoop-aws:2.7.3")



# Spark connection
sc <- spark_connect(master = "local", config = config)

# Hadoop configuration: get the JavaSparkContext and set the S3A options
ctx <- spark_context(sc)
jsc <- invoke_static(
  sc,
  "org.apache.spark.api.java.JavaSparkContext",
  "fromSparkContext",
  ctx
)

hconf <- jsc %>% invoke("hadoopConfiguration")
hconf %>% invoke("set", "com.amazonaws.services.s3a.enableV4", "true")
hconf %>% invoke("set", "fs.s3a.fast.upload", "true")

# Point the path at the folder itself; Spark reads every file inside it
folder_files <- "s3a://mybucket/abc/xyz"

rd_11 <- spark_read_csv(sc, name = "Retail", path = folder_files,
                        infer_schema = TRUE, header = FALSE, delimiter = "|")
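A quick way to confirm that all six files were read is to count the rows and inspect the inferred schema of the resulting Spark DataFrame (a sketch, not part of the original answer):

# Row count across every file in the folder (triggers a full scan)
sdf_nrow(rd_11)

# Inferred column names and types
sdf_schema(rd_11)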


spark_disconnect(sc)