ERROR AzureNativeFileSystemStore:DirectoryIsNotEmpty


I am trying to run this code in Azure HDInsight. I have a Spark cluster connected to Data Lake Storage.

// SAS key for the "data" container in the "spmdevsharedstorage" account
// (key pattern: fs.azure.sas.<container>.<account>.blob.core.windows.net)
spark.conf.set(
  "fs.azure.sas.data.spmdevsharedstorage.blob.core.windows.net",
  "xxxxxxxxxxx key xxxxxxxxxxx"
)


val shared_data = "wasbs://data@spmdevsharedstorage.blob.core.windows.net/"

//Read CSV
import org.apache.spark.sql.functions.col
import spark.implicits._ // enables the $"column" syntax

val dfCsv = spark.read.option("inferSchema", "true").option("header", true).csv(shared_data + "test/4G-pixel.csv")
val dfCsv_final_withcolumn = dfCsv.select($"latitude", $"longitude")
val dfCsv_final = dfCsv_final_withcolumn.withColumn("new_latitude", col("latitude") * 100)

//Write the result as a single CSV (coalesce(1) forces one output part file)
dfCsv_final.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").mode("overwrite").save(shared_data + "test/4G-pixel_edit.csv")
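
As an aside, on Spark 2.x and later the external com.databricks.spark.csv package is not required; the built-in CSV source does the same job. A minimal sketch of the equivalent write, assuming the same paths as above:

// Equivalent write using Spark's built-in CSV source (Spark 2.x+);
// shown only as an alternative sketch, same output path as above.
dfCsv_final.coalesce(1)
  .write
  .option("header", "true")
  .mode("overwrite")
  .csv(shared_data + "test/4G-pixel_edit.csv")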

The code reads the CSV file without problems. However, when it writes the new CSV file, I see the following error:

20/04/03 14:58:12 ERROR AzureNativeFileSystemStore: Encountered Storage Exception for delete on Blob: https://spmdevsharedstorage.blob.core.windows.net/data/test/4G-pixel_edit.csv/_temporary/0, Exception Details: This operation is not permitted on a non-empty directory. Error Code: DirectoryIsNotEmpty
org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This operation is not permitted on a non-empty directory.
  at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.delete(AzureNativeFileSystemStore.java:2627)
  at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.delete(AzureNativeFileSystemStore.java:2637)

The new CSV file is actually written to the Data Lake, but then the job stops with this error. How can I avoid it?

scala azure apache-spark hadoop hdinsight
1 Answer

I ran into a similar issue.

I solved it by setting the following configuration to true. It tells the file output committer to skip deleting the job's _temporary directory after the commit, which is exactly the delete that fails on WASB with DirectoryIsNotEmpty.

When submitting the job:

--conf spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped=true

Or from an existing session:

spark.conf.set("spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped", "true")