Cannot read HBase data with Spark in YARN cluster mode

Problem description

Version: CDH 6.2.1, Spark 2.4.0, HBase 2.0. My job reads HBase data through Spark. It works fine in local mode from the IDE, but when it is submitted with spark-submit --master yarn it throws an exception. The detailed log is below; I hope someone can help, thanks a lot.

    20/05/20 11:00:46 ERROR mapreduce.TableInputFormat: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:221)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:200)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:131)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2146)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
        at com.song.HbaseOnSpark1$.main(HbaseOnSpark1.scala:32)
        at com.song.HbaseOnSpark1.main(HbaseOnSpark1.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
        ... 27 more
    Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hbase.client.ConnectionImplementation.close(ConnectionImplementation.java:1938)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:310)
        ... 32 more

    20/05/20 11:00:46 ERROR yarn.ApplicationMaster: User class threw exception: java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
    java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:254)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:131)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2146)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
        at com.song.HbaseOnSpark1$.main(HbaseOnSpark1.scala:32)
        at com.song.HbaseOnSpark1.main(HbaseOnSpark1.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
    Caused by: java.lang.IllegalStateException: The input format instance has not been properly initialized. Ensure you call initializeTable either in your constructor or initialize method
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getTable(TableInputFormatBase.java:558)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:249)
        ... 24 more

apache-spark hbase cloudera-cdh
1 Answer

This is an HBase classpath problem on your cluster; you need to add the HBase jars to your classpath, like this:

 export SPARK_CLASSPATH=$SPARK_CLASSPATH:`hbase classpath`

Running hbase classpath prints all of the jars needed for the HBase connection, and so on.
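For Spark 2.x, where SPARK_CLASSPATH is deprecated, the same idea is often expressed through spark-submit configuration instead. The following is only a sketch: the class name is taken from the question's stack trace, the application jar name is a placeholder, and it assumes the HBase jars exist at the same paths on every YARN node (normally true on a CDH cluster).

    # Colon-separated classpath printed by the HBase CLI
    HBASE_CP=$(hbase classpath)

    # Put the HBase jars on both the driver and the executor classpaths
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.song.HbaseOnSpark1 \
      --conf spark.driver.extraClassPath="$HBASE_CP" \
      --conf spark.executor.extraClassPath="$HBASE_CP" \
      hbase-on-spark.jar   # placeholder application jar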

Why does it work in local mode?

Because all of the required jars are already present on the IDE's classpath (the project's libraries).


If you are using Maven, run mvn dependency:tree to see which jars your job needs on the cluster, and adjust your spark-submit script accordingly.
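For example (the -Dincludes filter is optional and simply narrows the tree to the HBase artifacts):

    # Full dependency tree of the project
    mvn dependency:tree

    # Only the HBase artifacts
    mvn dependency:tree -Dincludes=org.apache.hbase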

Then either pass all of the transitive jars with the --jars option, or build an uber jar so that the packaged jar carries the right dependencies.
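As a rough sketch of the --jars variant: --jars expects a comma-separated list of jar files, so the jar paths have to be joined with commas first. The HBase lib directory below is an assumption based on a typical CDH parcel layout; adjust it to your installation.

    # Comma-separated list of the HBase client jars (directory is an assumption)
    HBASE_JARS=$(echo /opt/cloudera/parcels/CDH/lib/hbase/lib/*.jar | tr ' ' ',')

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.song.HbaseOnSpark1 \
      --jars "$HBASE_JARS" \
      hbase-on-spark.jar   # placeholder application jar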

Further reading: spark-submit --jars arguments wants comma list, how to declare a directory of jars?
