spark-submit does not work with my jar located in HDFS

Problem description (0 votes, 1 answer)

Here is my situation:

Apache Spark version 2.4.4

Hadoop version 2.7.4

My application jar is located in HDFS.

My spark-submit looks like this:

/software/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
  --class com.me.MyClass \
  --master spark://host2.local:7077 \
  --deploy-mode cluster \
  hdfs://host2.local:9000/apps/myapps.jar

I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.tracing.SpanReceiverHost.get(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Lorg/apache/hadoop/tracing/SpanReceiverHost;
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:144)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:139)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:139)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:61)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:64)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.deploy.DependencyUtils$.resolveAndDownloadJars(DependencyUtils.scala:60)
    at org.apache.spark.deploy.worker.DriverWrapper$.setupDependencies(DriverWrapper.scala:96)
    at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:60)
    at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)

Any pointers on how to resolve this, please? Thanks.

apache-spark hadoop spark-submit
1 Answer

Adding "--jars" before the jar file can help.
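One way to read that suggestion against the original command is the sketch below (an assumption about what the answerer meant, not a verified fix): the application jar stays as the last argument, since spark-submit still needs a primary resource, and the same HDFS path is additionally listed under --jars. The host, class name, and paths are simply those from the question.

# Hypothetical invocation based on the answer's suggestion;
# host, class, and jar path are taken from the question.
/software/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
  --class com.me.MyClass \
  --master spark://host2.local:7077 \
  --deploy-mode cluster \
  --jars hdfs://host2.local:9000/apps/myapps.jar \
  hdfs://host2.local:9000/apps/myapps.jar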
