java.lang.NoSuchFieldError: HIVE_LOCAL_TIME_ZONE when running a Spark pipeline on a Hadoop cluster

Question (0 votes, 1 answer)

My Java Spark code is written against Spark 3.2.4 and JDK 1.8, while the runtime is 2.11.12 and JDK 8. Before triggering spark-submit, I bundle all the necessary jars into an uber-jar. My Maven build also has a separate hive-serde:3.1.4 dependency (added to resolve another error).
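For reference, a dependency like the one described would be declared in the pom.xml roughly as follows (assuming the standard org.apache.hive groupId; the version is the one stated above):

```xml
<!-- Sketch of the extra serde dependency mentioned above;
     org.apache.hive is the usual groupId for hive-serde. -->
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-serde</artifactId>
    <version>3.1.4</version>
</dependency>
```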

The pipeline reads some input from a Hive table, processes it, and writes the output back to Hive.

The error below is thrown when running in the cloud. Is this some kind of jar conflict?

23/10/04 09:20:11 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.NoSuchFieldError: HIVE_LOCAL_TIME_ZONE
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.extractColumnInfo(LazySerDeParameters.java:166)
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:92)
    at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:116)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.addColumnMetadataToConf(HiveTableScanExec.scala:125)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopConf$lzycompute(HiveTableScanExec.scala:101)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopConf(HiveTableScanExec.scala:98)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopReader$lzycompute(HiveTableScanExec.scala:110)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopReader(HiveTableScanExec.scala:105)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.$anonfun$doExecute$2(HiveTableScanExec.scala:211)
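A NoSuchFieldError at this point usually does indicate a jar conflict: the serde class was compiled against a newer HiveConf than the one loaded at runtime (the HIVE_LOCAL_TIME_ZONE entry in HiveConf.ConfVars only exists in Hive 3.x). One way to check is to print which jar each Hive class is actually loaded from. This is a hypothetical diagnostic sketch (the class name JarConflictProbe is made up), to be run with the same classpath that spark-submit uses:

```java
import java.security.CodeSource;

public class JarConflictProbe {
    // Returns the jar/location a class was loaded from, or a marker
    // string if the class is missing or comes from the JDK itself.
    static String probe(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap/JDK" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // If these two classes resolve to jars from different Hive
        // versions, that mismatch explains the NoSuchFieldError.
        System.out.println(probe("org.apache.hadoop.hive.conf.HiveConf"));
        System.out.println(probe("org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters"));
    }
}
```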
apache-spark hadoop apache-spark-sql
1 Answer (0 votes)

Did you ever find a solution?

I seem to be running into the same problem...
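Not a confirmed fix, but a common mitigation for this kind of conflict is to tell Spark to prefer classes from the user's uber-jar over the cluster's bundled Hive jars (the jar and class names below are placeholders; the two userClassPathFirst settings are standard Spark configuration properties):

```shell
# Placeholder jar/class names; userClassPathFirst makes user-supplied
# jars win classpath conflicts against the cluster's bundled jars.
spark-submit \
  --master yarn \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.MyPipeline \
  my-pipeline-uber.jar
```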
