Exception java.util.NoSuchElementException: None.get during a Spark Dataset save() operation

Problem description

When I try to save a Dataset as parquet to S3 storage, I get the exception "java.util.NoSuchElementException: None.get":

Exception:

java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:787)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:768)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
...

Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.sql.execution.datasources.BasicWriteJobStatsTracker$.metrics(BasicWriteStatsTracker.scala:173)
at org.apache.spark.sql.execution.command.DataWritingCommand$class.metrics(DataWritingCommand.scala:51)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.metrics$lzycompute(InsertIntoHadoopFsRelationCommand.scala:47)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.metrics(InsertIntoHadoopFsRelationCommand.scala:47)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.metrics$lzycompute(commands.scala:100)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.metrics(commands.scala:100)
at org.apache.spark.sql.execution.SparkPlanInfo$.fromSparkPlan(SparkPlanInfo.scala:56)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:76)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:566)

It looks like a problem related to the SparkContext. I do not create a SparkContext instance explicitly; I only use a SparkSession in my source code:

final SparkSession sparkSession = SparkSession
            .builder()
            .appName("Java Spark SQL job")
            .getOrCreate();

ds.write().mode("overwrite").parquet(path);

Any suggestions or workarounds? Thanks.

Update 1:

The creation of ds is a bit involved, but I will try to list the main call sequence below (a rough code sketch follows the list):

Process 1:

    1. session.read().parquet(path) as the source ds;
    2. ds.createOrReplaceTempView(view);
    3. sparkSession.sql(sql) as ds1;
    4. sparkSession.sql(sql) as ds2;
    5. ds1.save();
    6. ds2.save();
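Roughly, in Java (names such as inputPath, viewName, sql1, sql2, outPath1, outPath2 are placeholders, not the actual code):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// One round of "Process 1", reconstructed from the six steps above.
static void runOneRound(SparkSession sparkSession,
                        String inputPath, String viewName,
                        String sql1, String sql2,
                        String outPath1, String outPath2) {
    Dataset<Row> source = sparkSession.read().parquet(inputPath); // step 1
    source.createOrReplaceTempView(viewName);                     // step 2
    Dataset<Row> ds1 = sparkSession.sql(sql1);                    // step 3
    Dataset<Row> ds2 = sparkSession.sql(sql2);                    // step 4
    ds1.write().mode("overwrite").parquet(outPath1);              // step 5
    ds2.write().mode("overwrite").parquet(outPath2);              // step 6
}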

Process 2:

After step 6, I go back to step 1 with the same Spark session and run the next round. Finally, after all processing has finished, sparkSession.stop() is called.

I can find a log entry after Process 1 completes which seems to indicate that the SparkContext was already stopped before Process 2:

INFO SparkContext: Successfully stopped SparkContext
1 Answer

Simply removing sparkSession.stop() fixed the problem.
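For illustration, a minimal sketch of the corrected driver flow, assuming the round structure from Update 1 (all variable names are placeholders, and runOneRound refers to the sketch shown under Update 1):

SparkSession spark = SparkSession.builder()
        .appName("Java Spark SQL job")
        .getOrCreate();

// Run every round against the same, still-running session.
runOneRound(spark, inputPath1, "view1", sql1a, sql1b, outPath1a, outPath1b); // Process 1
runOneRound(spark, inputPath2, "view2", sql2a, sql2b, outPath2a, outPath2b); // Process 2

// Do not call spark.stop() between rounds: once the shared SparkContext has
// been stopped, the next DataFrameWriter.save() fails with "None.get" while
// building its write metrics (see BasicWriteJobStatsTracker in the stack
// trace above). If the session is stopped at all, stop it only here, after
// the last write has completed:
// spark.stop();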
