Importing manually declared nested schemas from a package causes a NullPointerException


I am trying to parse an XML file into a DataFrame using Databricks' spark-xml with this line of code:

val xmlDF = spark
    .read
    .option("rowTag", "MeterReadingDocument")
    .option("valueTag", "foo") // meaningless, used to parse tags with no child elements
    .option("inferSchema", "false")
    .schema(schema)
    .xml(connectionString)

As you can see, I provide a schema to avoid the expensive schema-inference step. The schema is defined as

 val schema = MyProjectUtils.Schemas.meterReadingDocumentSchema

where MyProjectUtils is a package containing the object Schemas that holds the schema definitions:

object Schemas {
...
// nested schemas 
...

val meterReadingDocumentSchema = StructType(
    Array(
      StructField("ReadingStatusRefTable", readingStatusRefTableSchema, nullable = true),
      StructField("Header", headerSchema, nullable = true),
      StructField("ImportExportParameters", importExportParametersSchema, nullable = true),
      StructField("Channels", channelsSchema, nullable = true),
      StructField("_xmlns:xsd", StringType, nullable = true),
      StructField("_xmlns:xsi", StringType, nullable = true)
    )
  )
}

You will notice that readingStatusRefTableSchema, headerSchema, and the other custom schemas are StructTypes corresponding to nested elements in the XML. These are in turn nested themselves, for example:

val headerSchema = StructType(
    Array(
      StructField("Creation_Datetime", creationDatetimeSchema, nullable = true),
      StructField("Export_Template", exportTemplateSchema, nullable = true),
      StructField("System", SystemSchema, nullable = true),
      StructField("Path", pathSchema, nullable = true),
      StructField("Timezone", timezoneSchema, nullable = true)
    )
  )

val creationDatetimeSchema = StructType(
    Array(
      StructField("_Datetime", TimestampType, nullable = true),
      StructField("foo", StringType, nullable = true)
    )
  )

(I can provide more detail on the nested nature of the schemas if relevant.)

If I declare these nested schemas in the notebook itself, or as an object inside the notebook I use to read the data, this works and the data loads. But when I build a jar from this project and execute it, I get the following stack trace:


INFO ApplicationMaster [shutdown-hook-0]: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.NullPointerException
    at org.apache.spark.sql.types.ArrayType.existsRecursively(ArrayType.scala:102)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1$adapted(StructType.scala:508)
    at scala.collection.IndexedSeqOptimized.prefixLengthImpl(IndexedSeqOptimized.scala:41)
    at scala.collection.IndexedSeqOptimized.exists(IndexedSeqOptimized.scala:49)
    at scala.collection.IndexedSeqOptimized.exists$(IndexedSeqOptimized.scala:49)
    at scala.collection.mutable.ArrayOps$ofRef.exists(ArrayOps.scala:198)
    at org.apache.spark.sql.types.StructType.existsRecursively(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1$adapted(StructType.scala:508)
    at scala.collection.IndexedSeqOptimized.prefixLengthImpl(IndexedSeqOptimized.scala:41)
    at scala.collection.IndexedSeqOptimized.exists(IndexedSeqOptimized.scala:49)
    at scala.collection.IndexedSeqOptimized.exists$(IndexedSeqOptimized.scala:49)
    at scala.collection.mutable.ArrayOps$ofRef.exists(ArrayOps.scala:198)
    at org.apache.spark.sql.types.StructType.existsRecursively(StructType.scala:508)
    at org.apache.spark.sql.catalyst.util.CharVarcharUtils$.hasCharVarchar(CharVarcharUtils.scala:56)
    at org.apache.spark.sql.catalyst.util.CharVarcharUtils$.failIfHasCharVarchar(CharVarcharUtils.scala:63)
    at org.apache.spark.sql.DataFrameReader.schema(DataFrameReader.scala:76)
    at com.mycompany.DataIngestion$.delayedEndpoint$com$mycompany$DataIngestion$1(DataIngestion.scala:44)
    at com.mycompany.DataIngestion$delayedInit$body.apply(DataIngestion.scala:10)
    at scala.Function0.apply$mcV$sp(Function0.scala:39)
    at scala.Function0.apply$mcV$sp$(Function0.scala:39)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
    at scala.App.$anonfun$main$1$adapted(App.scala:80)
    at scala.collection.immutable.List.foreach(List.scala:431)
    at scala.App.main(App.scala:80)
    at scala.App.main$(App.scala:78)
    at com.mycompany.DataIngestion$.main(DataIngestion.scala:10)
    at com.mycompany.DataIngestion.main(DataIngestion.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:739)
)

I added another, simpler CSV file and created a schema for it in the same Schemas object. This schema has no nested structs coming from the Schemas object and is written as:

 val simplerDocSchema = MyProjectUtils.Schemas.anotherDocSchema

spark
      .read
      .format("csv")
      .schema(simplerDocSchema)
      .load(connectionString)
object Schemas {
 ...
val anotherDocSchema: StructType = StructType(
    Array(
      StructField("ID", StringType, nullable = true),
      StructField("DATE", StringType, nullable = true),
      StructField("CODE", StringType, nullable = true),
      StructField("AD", StringType, nullable = true),
      StructField("ACCOUNT", StringType, nullable = true)
    )
  )
}

I expected this to fail as well, but it runs fine both as a compiled project and in the notebook.

scala apache-spark xml-parsing apache-spark-xml
1 Answer

Although you don't say which Spark version you are using, the relevant code (ArrayType.existsRecursively, where the stack trace starts) appears not to have changed in 8 years:

override private[spark] def existsRecursively(f: (DataType) => Boolean): Boolean = {
    f(this) || elementType.existsRecursively(f)
  }

elementType is most likely null here. Since you haven't provided the complete code, my guess is that you have an ArrayType(someVal, ..) where someVal is not yet defined at that point.
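
That is exactly what happens in Scala when a val in an object is a forward reference: referring to a val declared further down in the same object compiles, but during object initialization the referenced field is still null. A minimal sketch of this failure mode (ForwardRefDemo, outer, and inner are hypothetical names, not from the question):

import org.apache.spark.sql.types._

object ForwardRefDemo {
  // outer is initialized first, while inner is still null, so the
  // ArrayType is silently built with a null element type.
  val outer: StructType = StructType(
    Array(StructField("items", ArrayType(inner), nullable = true))
  )

  val inner: StructType = StructType(
    Array(StructField("foo", StringType, nullable = true))
  )
}

Passing ForwardRefDemo.outer to spark.read.schema(...) then walks the schema via existsRecursively, hits the null elementType, and throws the NullPointerException seen in the stack trace above. Note, for instance, that in the question headerSchema references creationDatetimeSchema, which is declared below it; that is the same ordering hazard, although the stack trace points at an ArrayType elementType inside one of the schemas not shown.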

Replace your vals with defs and try again.
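
Applied to the question's Schemas object, a minimal sketch of that fix (trimmed to the two schemas quoted above; a lazy val would work just as well):

import org.apache.spark.sql.types._

object Schemas {
  // A def is evaluated when called rather than during object
  // initialization, so the forward reference to creationDatetimeSchema
  // is safe even when the object comes from a compiled jar.
  def headerSchema: StructType = StructType(
    Array(
      StructField("Creation_Datetime", creationDatetimeSchema, nullable = true)
    )
  )

  def creationDatetimeSchema: StructType = StructType(
    Array(
      StructField("_Datetime", TimestampType, nullable = true),
      StructField("foo", StringType, nullable = true)
    )
  )
}

The trade-off is that each call rebuilds the StructType; if that matters, keeping vals but declaring them strictly in dependency order (or switching to lazy val) avoids the rebuild.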
