Spark-SQL dataframe count gives java.lang.ArrayIndexOutOfBoundsException

Problem description

I am creating a dataframe with Apache Spark version 2.3.1. When I try to count the dataframe, I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 12, analitik11.{hostname}, executor 1): java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
  at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
  at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
  ... 49 elided
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
  at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

We use com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder to connect to Hive and read tables from it. The code that produces the dataframe is as follows:

    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()

    val edgesTest = hive.executeQuery("select trim(s_vno) as src, trim(a_vno) as dst, share, administrator, account, all_share " +
      "from ebyn.babs_edges_2018 where (share <> 0 or administrator <> 0 or account <> 0 or all_share <> 0) and trim(date) = '201801'")

    val share_org_edges = edgesTest.alias("df1").
      join(edgesTest.alias("df2"), "src").
      where("df1.dst <> df2.dst").
      groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst")).
      agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share")).
      persist

    share_org_edges.count

The table properties are as follows:

CREATE TABLE `EBYN.BABS_EDGES_2018`(                                         
   `date` string,                                                            
   `a_vno` string,                                                            
   `s_vno` string,                                                            
   `amount` double,                                                        
   `num` int,                                                            
   `share` int,                                                               
   `share_ratio` int,                                                            
   `administrator` int,                                                            
   `account` int,                                                            
   `share-all` int)                                                        
 COMMENT 'Imported by sqoop on 2018/10/11 11:10:16'                           
 ROW FORMAT SERDE                                                             
   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'                       
 WITH SERDEPROPERTIES (                                                       
   'field.delim'='',                                                         
   'line.delim'='\n',                                                         
   'serialization.format'='')                                                
 STORED AS INPUTFORMAT                                                        
   'org.apache.hadoop.mapred.TextInputFormat'                                 
 OUTPUTFORMAT                                                                 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'               
 LOCATION                                                                     
   'hdfs://ggmprod/warehouse/tablespace/managed/hive/ebyn.db/babs_edges_2018' 
 TBLPROPERTIES (                                                              
   'bucketing_version'='2',                                                   
   'transactional'='true',                                                    
   'transactional_properties'='insert_only',                                  
   'transient_lastDdlTime'='1539245438')                            
scala apache-spark apache-spark-sql
1 Answer

Problem: edgesTest is a dataframe whose logical plan contains a single DataSourceV2Relation node. That DataSourceV2Relation node holds a mutable HiveWarehouseDataSourceReader, the object that will be used to read the Hive table. Because the edgesTest dataframe is used twice, as df1 and as df2, column pruning is applied twice during Spark's logical plan optimization, both times on the same mutable HiveWarehouseDataSourceReader instance, and the second pruning overwrites the required columns set by the first. At execution time the reader fires the same query against the Hive warehouse twice, each time with only the columns required by the second pruning, so the code generated by Spark cannot find the columns it expects in the Hive query result.
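
A hedged side note, not part of the original answer: on Spark 2.3 one way to avoid sharing the mutable reader between the two aliases is to break the lineage of edgesTest before the self-join, for example by rebuilding the dataframe from its RDD. The name edgesIsolated below is only illustrative, and this is a sketch rather than a verified fix:

    import org.apache.spark.sql.functions.{greatest, least, max}

    // Sketch of a possible workaround: rebuilding the dataframe from its RDD gives it
    // a fresh logical plan (an RDD-backed relation instead of the DataSourceV2Relation),
    // so df1 and df2 no longer share the same mutable HiveWarehouseDataSourceReader and
    // column pruning is not pushed into the Hive warehouse reader twice.
    val edgesIsolated = spark.createDataFrame(edgesTest.rdd, edgesTest.schema)

    val share_org_edges = edgesIsolated.alias("df1").
      join(edgesIsolated.alias("df2"), "src").
      where("df1.dst <> df2.dst").
      groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst")).
      agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share")).
      persist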

Solution: In Spark 2.4, DataSourceV2 was improved, in particular by SPARK-23203 "DataSourceV2 should use immutable trees".

In Spark 2.3, disable column pruning in the HiveWarehouseConnector data source reader.

Hortonworks has fixed this issue, as stated in the HDP 3.1.5 Release Notes. The fix can be found in their HiveWarehouseConnector github repository:

    if (useSpark23xReader) {
      LOG.info("Using reader HiveWarehouseDataSourceReaderForSpark23x with column pruning disabled");
      return new HiveWarehouseDataSourceReaderForSpark23x(params);
    } else if (disablePruningPushdown) {
      LOG.info("Using reader HiveWarehouseDataSourceReader with column pruning and filter pushdown disabled");
      return new HiveWarehouseDataSourceReader(params);
    } else {
      LOG.info("Using reader PrunedFilteredHiveWarehouseDataSourceReader");
      return new PrunedFilteredHiveWarehouseDataSourceReader(params);
    }

In addition, the HDP 3.1.5 Hive integration doc specifies:

To prevent data correctness issues in this release, pruning and projection pushdown are disabled by default. ... To prevent these issues and ensure correct results, do not enable pruning and pushdown.
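
For completeness, a minimal sketch of explicitly keeping pruning and pushdown disabled when building the session. The property name is an assumption based on the connector's HWConf and should be verified against the HiveWarehouseConnector build actually in use:

    // Assumed property name (verify against HWConf in your HiveWarehouseConnector build):
    // setting it to true keeps column pruning and filter pushdown disabled in the HWC
    // reader, which corresponds to the disablePruningPushdown branch in the snippet above.
    spark.conf.set("spark.datasource.hive.warehouse.disable.pruning.and.pushdowns", "true")

    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()

The same property can also be supplied with --conf on spark-submit instead of being set on the running session.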
