Error when writing to OrcNewOutputFormat using MapR MultipleOutputs


We read data from ORC files and use MultipleOutputs to write it back out in both ORC and Parquet format. The job is map-only, with no reducer. In some cases we hit the errors shown below (after the setup sketch), and they fail the whole job. I believe the two errors are related, but I am not sure why they only occur on some runs. Let me know if more information is needed.
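The driver wiring looks roughly like the following minimal sketch (not our actual code: the class name, paths, and the named-output name "orcOut" are placeholders, and the mapper, which writes each row through MultipleOutputs, is omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat;
    import org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    public class OrcCopyDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "orc-multi-output");
            job.setJarByClass(OrcCopyDriver.class);
            job.setNumReduceTasks(0);                      // map-only, no reducer

            job.setInputFormatClass(OrcNewInputFormat.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // job.setMapperClass(...): the mapper (omitted here) creates a
            // MultipleOutputs instance in setup(), calls mos.write(...) per row,
            // and closes it in cleanup() -- the close() seen in the stack traces.

            // Named ORC output written through MultipleOutputs.
            MultipleOutputs.addNamedOutput(job, "orcOut",
                    OrcNewOutputFormat.class, NullWritable.class, Writable.class);
            // The Parquet named output is registered the same way with the
            // Parquet output format class and its write-support configuration.

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }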

Error: java.lang.RuntimeException: Overflow of newLength. smallBuffer.length=1073741824, nextElemLength=300947

Error: java.lang.ArrayIndexOutOfBoundsException: 1000
    at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
    at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
    at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
    at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)


Error: java.lang.NullPointerException
    at java.lang.System.arraycopy(Native Method)
    at org.apache.orc.impl.DynamicByteArray.add(DynamicByteArray.java:115)
    at org.apache.orc.impl.StringRedBlackTree.addNewKey(StringRedBlackTree.java:48)
    at org.apache.orc.impl.StringRedBlackTree.add(StringRedBlackTree.java:60)
    at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
    at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
    at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
    at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
Tags: java hadoop mapreduce amazon-emr
1 Answer

In my case, the solution was to change orc.rows.between.memory.checks (or spark.hadoop.orc.rows.between.memory.checks) from its default of 5000 to 1, because the ORC writer apparently cannot handle an exceptionally large row being added to a stripe.

The value can probably be tuned further to get a better balance between safety and performance.
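For reference, a minimal sketch of applying this in a plain MapReduce driver (the job name is a placeholder; only the property name and value come from the fix above):

    // In the MapReduce driver, before the Job is created:
    Configuration conf = new Configuration();
    // Check writer memory usage after every row instead of every 5000 rows,
    // so an unusually large row is accounted for before the next one is added.
    conf.setInt("orc.rows.between.memory.checks", 1);
    Job job = Job.getInstance(conf, "orc-output-job");   // job name is a placeholder

    // Spark equivalent (the spark.hadoop. prefix forwards it to the Hadoop conf):
    //   spark-submit --conf spark.hadoop.orc.rows.between.memory.checks=1 ...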
