java.io.IOException: Failed to rename — error when writing to an HDFS path

Question · votes: 0 · answers: 2

I am using spark-sql-2.4.1v, which uses hadoop-2.6.5.jar. I need to save the data to HDFS first and then move it to Cassandra, so I am trying to save it on HDFS as shown below:

String hdfsPath = "/user/order_items/";
cleanedDs.createOrReplaceTempView("source_tab");

givenItemList.parallelStream().forEach(item -> {
    String query = String.format(
        "select %s as itemCol, avg(%s) as mean from source_tab group by year",
        item, item);
    Dataset<Row> resultDs = sparkSession.sql(query);

    saveDsToHdfs(hdfsPath, resultDs);
});


public static void saveDsToHdfs(String parquet_file, Dataset<Row> df) {
    df.write()                                 
      .format("parquet")
      .mode("append")
      .save(parquet_file);
    logger.info("Saved parquet file: " + parquet_file + " successfully");
}

When I run my job on the cluster, it throws this error:

java.io.IOException: Failed to rename FileStatus{path=hdfs:/user/order_items/_temporary/0/_temporary/attempt_20180626192453_0003_m_000007_59/part-00007.parquet; isDirectory=false; length=952309; replication=1; blocksize=67108864; modification_time=1530041098000; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false} to hdfs:/user/order_items/part-00007.parquet
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:415)

Any suggestions on how to fix this issue?

apache-spark hadoop apache-spark-sql hdfs hadoop2
2 Answers

5 votes

You can do all the selects in a single job and union them into one table.

Dataset<Row> resultDs = givenItemList.parallelStream().map(item -> {
    String query = String.format(
        "select %s as itemCol, avg(%s) as mean from source_tab group by year",
        item, item);
    return sparkSession.sql(query);
}).reduce((a, b) -> a.union(b)).get();

saveDsToHdfs(hdfsPath, resultDs);
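The reduce-into-one pattern above can be sketched without Spark, assuming plain Java streams; here a `List<String>` stands in for a `Dataset<Row>` and a hypothetical `union` helper plays the role of `Dataset.union`:

```java
import java.util.ArrayList;
import java.util.List;

public class UnionDemo {
    // Hypothetical stand-in for Dataset.union: concatenates two row lists.
    static List<String> union(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    public static void main(String[] args) {
        // One "result dataset" per item, as the map step would produce.
        List<List<String>> perItem = List.of(
            List.of("row1"), List.of("row2"), List.of("row3"));

        // reduce(...) returns an Optional, hence the .get() in the answer.
        List<String> all = perItem.parallelStream()
            .reduce(UnionDemo::union)
            .get();

        System.out.println(all.size()); // 3 rows after the union
    }
}
```

Because concatenation is associative and the stream is ordered, the parallel reduce still yields all rows in item order, which is why a single write at the end avoids the concurrent-rename problem entirely.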

1 vote

The error occurs because you are trying to write a dataframe to the same location for every item in the givenItemList collection. If the writes ran sequentially, you would normally get the error:

Output directory already exists

But since the forEach runs the items in parallel threads, you get this rename error instead. You can give each thread its own directory, like this:

givenItemList.parallelStream().forEach(item -> {
    String query = String.format(
        "select %s as itemCol, avg(%s) as mean from source_tab group by year",
        item, item);
    Dataset<Row> resultDs = sparkSession.sql(query);

    saveDsToHdfs(String.format("%s_%s", hdfsPath, item), resultDs);
});

Alternatively, you can create subdirectories under hdfsPath like this:

givenItemList.parallelStream().forEach(item -> {
    String query = String.format(
        "select %s as itemCol, avg(%s) as mean from source_tab group by year",
        item, item);
    Dataset<Row> resultDs = sparkSession.sql(query);

    saveDsToHdfs(String.format("%s/%s", hdfsPath, item), resultDs);
});
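As a minimal sketch (plain Java, no Spark; the item value "price" and the view name source_tab are just the illustrative names used above), this shows how String.format builds a distinct query and output path per item, since Java does not interpolate $item inside string literals:

```java
public class PathDemo {
    // Builds the per-item aggregation query against the registered temp view.
    static String buildQuery(String item) {
        return String.format(
            "select %s as itemCol, avg(%s) as mean from source_tab group by year",
            item, item);
    }

    // Builds a per-item subdirectory under the base HDFS path, so each
    // parallel thread writes to its own location.
    static String buildPath(String hdfsPath, String item) {
        return String.format("%s/%s", hdfsPath, item);
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("price"));
        System.out.println(buildPath("/user/order_items", "price"));
    }
}
```

With distinct paths, each Spark write gets its own _temporary staging directory and there are no colliding renames at commit time.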
