Space in column name throws an exception when the table is stored as Parquet

Problem description · Votes: 1 · Answers: 2

I get an error when inserting data into a Parquet-format table whose column names contain spaces.

Using the Cloudera distribution of the Hive client:

CREATE TABLE testColumNames (`First Name` string) STORED AS PARQUET;
INSERT INTO testColumNames SELECT 'John Smith';

Is there a workaround for this issue? We also get the same error from Spark 2.3 code. The stack trace:

org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1:   optional binary first name
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:248)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:583)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:527)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:636)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:98)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Caused by: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1:   optional binary first name
at parquet.schema.MessageTypeParser.check(MessageTypeParser.java:212)
at parquet.schema.MessageTypeParser.addPrimitiveType(MessageTypeParser.java:185)
at parquet.schema.MessageTypeParser.addType(MessageTypeParser.java:111)
at parquet.schema.MessageTypeParser.addGroupTypeFields(MessageTypeParser.java:99)
at parquet.schema.MessageTypeParser.parse(MessageTypeParser.java:92)
at parquet.schema.MessageTypeParser.parseMessageType(MessageTypeParser.java:82)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.getSchema(DataWritableWriteSupport.java:43)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.init(DataWritableWriteSupport.java:48)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:310)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:287)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:69)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:134)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:123)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:260)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:245)
... 18 more

org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1:   optional binary first name
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:248)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:583)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:527)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:974)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:199)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1:   optional binary first name
at parquet.schema.MessageTypeParser.check(MessageTypeParser.java:212)
at parquet.schema.MessageTypeParser.addPrimitiveType(MessageTypeParser.java:185)
at parquet.schema.MessageTypeParser.addType(MessageTypeParser.java:111)
at parquet.schema.MessageTypeParser.addGroupTypeFields(MessageTypeParser.java:99)
at parquet.schema.MessageTypeParser.parse(MessageTypeParser.java:92)
at parquet.schema.MessageTypeParser.parseMessageType(MessageTypeParser.java:82)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.getSchema(DataWritableWriteSupport.java:43)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.init(DataWritableWriteSupport.java:48)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:310)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:287)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:69)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:134)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:123)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:260)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:245)
... 16 more
apache-spark hive cloudera parquet
2 Answers
1 vote

Please refer to the following issue:

https://issues.apache.org/jira/browse/PARQUET-677

It appears this issue has not been resolved yet on the Parquet side.
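Until it is, a common workaround is to keep the physical Parquet column name free of spaces and expose the friendly label through an alias or a view. A minimal HiveQL sketch, not from the original post: the underscore column name and the view name are illustrative.

-- Store the data under a space-free column name; the Parquet writer accepts this.
CREATE TABLE testColumNames (first_name string) STORED AS PARQUET;
INSERT INTO testColumNames SELECT 'John Smith';

-- Optionally expose the original "First Name" label through a view for querying.
-- The view is not Parquet-backed, so the space in the alias should not hit the writer.
CREATE VIEW testColumNames_v AS
SELECT first_name AS `First Name` FROM testColumNames;

SELECT `First Name` FROM testColumNames_v;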


0 votes

From the Hive docs: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL

Table names and column names are case insensitive, but SerDe and property names are case sensitive.

In Hive 0.12 and earlier, only alphanumeric and underscore characters are allowed in table and column names.

In Hive 0.13 and later, column names can contain any Unicode character (see HIVE-6013); however, dot (.) and colon (:) produce errors when queried, so they are disallowed in Hive 1.2.0 (see HIVE-10120). Any column name specified within backticks (`) is treated literally. Within a backtick string, use double backticks (``) to represent a backtick character. Backtick quoting also allows reserved keywords to be used as table and column identifiers.

To revert to pre-0.13.0 behavior and restrict column names to alphanumeric and underscore characters, set the configuration property hive.support.quoted.identifiers to none. In that configuration, backticked names are interpreted as regular expressions. For details, see Supporting Quoted Identifiers in Column Names.
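As a quick illustration of the quoting rules above (a sketch assuming Hive 0.13+ defaults; the table name is made up), the parser itself accepts a backticked name with a space, and it is the Parquet write path shown in the stack trace that rejects it:

SET hive.support.quoted.identifiers=column;   -- the default since Hive 0.13.0

-- Backticked identifiers are taken literally, so a space is accepted
-- for non-Parquet storage such as text files.
CREATE TABLE quoted_demo (`First Name` string) STORED AS TEXTFILE;
SELECT `First Name` FROM quoted_demo;

-- Setting the property to none restores the pre-0.13 behavior, and
-- backticked names are then interpreted as regular expressions.
SET hive.support.quoted.identifiers=none;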
