Metadata error: org.apache.thrift.transport.TTransportException

Question · 0 votes · 2 answers

What does the error "Metadata error: org.apache.thrift.transport.TTransportException" mean, and under what circumstances does it occur?

I hit this error both when creating a table and when loading data into it.

hadoop hive bigdata cloudera-cdh
2 Answers

0 votes

org.apache.thrift.transport.TTransportException is a very generic error: the message indicates a problem with HiveServer and points you to the Hive logs. If you can capture the full stack trace from the logs and share the exact details, the real cause can usually be identified. In my experience the most common causes are Hive metastore problems (metadata unreachable or corrupt), directory permission issues, concurrency-related issues, and HiveServer port conflicts.

You can try restarting the server and recreating the table. Setting the Hive port explicitly before starting the server may also help:

    $ export HIVE_PORT=10000
    $ hive --service hiveserver    # on Hive 2.x and later, use: hive --service hiveserver2

There may be other causes as well, but we can only dig further once we have the complete stack trace.
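Before digging into the Hive logs, it can be worth ruling out the simplest cause: the Thrift port not being reachable at all. A minimal sketch of such a check, assuming the default ports (9083 for the metastore, 10000 for HiveServer) and a `localhost` deployment; adjust host and ports to your cluster:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 9083 is the usual metastore Thrift port, 10000 the usual HiveServer port;
# a False here means the TTransportException is likely a connectivity/port
# problem rather than a metadata problem.
for port in (9083, 10000):
    print(port, "open" if port_open("localhost", port) else "closed")
```

If both ports report open, the problem is more likely on the server side (permissions, metastore state, concurrency) and the Hive logs are the next stop.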


0 votes

I am seeing the same error. The connection succeeds at first, but breaks as soon as it tries to read the tables. Any idea what is going on?

2024-02-20T01:53:00,718 INFO [task-runner-0-priority-0] org.apache.hadoop.hive.metastore.HiveMetaStoreClient - Connected to metastore.
2024-02-20T01:53:00,718 INFO [task-runner-0-priority-0] org.apache.hadoop.hive.metastore.RetryingMetaStoreClient - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.metastore.HiveMetaStoreClient ugi=druid (auth:SIMPLE) retries=1 delay=1 lifetime=0
2024-02-20T01:53:00,890 ERROR [task-runner-0-priority-0] org.apache.hadoop.hive.metastore.utils.MetaStoreUtils - Got exception: org.apache.thrift.transport.TTransportException null
org.apache.thrift.transport.TTransportException: null
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_tables(ThriftHiveMetastore.java:1999) ~[hive-standalone-metastore-3.1.3.jar:3.1.3]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_tables(ThriftHiveMetastore.java:1986) ~[hive-standalone-metastore-3.1.3.jar:3.1.3]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllTables(HiveMetaStoreClient.java:1727) ~[hive-standalone-metastore-3.1.3.jar:3.1.3]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllTables(HiveMetaStoreClient.java:1718) ~[hive-standalone-metastore-3.1.3.jar:3.1.3]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_401]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_401]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_401]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_401]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208) ~[hive-standalone-metastore-3.1.3.jar:3.1.3]
    at com.sun.proxy.$Proxy123.getAllTables(Unknown Source) ~[?:?]
    at org.apache.iceberg.hive.HiveCatalog.lambda$listTables$0(HiveCatalog.java:119) ~[iceberg-spark-runtime-3.3_2.12-1.0.0.jar:?]
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:58) ~[iceberg-spark-runtime-3.3_2.12-1.0.0.jar:?]
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51) ~[iceberg-spark-runtime-3.3_2.12-1.0.0.jar:?]
    at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:82) ~[iceberg-spark-runtime-3.3_2.12-1.0.0.jar:?]
    at org.apache.iceberg.hive.HiveCatalog.listTables(HiveCatalog.java:119) ~[iceberg-spark-runtime-3.3_2.12-1.0.0.jar:?]
    at org.apache.druid.iceberg.input.IcebergCatalog.extractSnapshotDataFiles(IcebergCatalog.java:74) ~[druid-iceberg-extensions-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.iceberg.input.IcebergInputSource.retrieveIcebergDatafiles(IcebergInputSource.java:174) ~[druid-iceberg-extensions-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.iceberg.input.IcebergInputSource.reader(IcebergInputSource.java:105) ~[druid-iceberg-extensions-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.AbstractBatchIndexTask.inputSourceReader(AbstractBatchIndexTask.java:215) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.IndexTask.collectIntervalsAndShardSpecs(IndexTask.java:784) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.IndexTask.createShardSpecsFromInput(IndexTask.java:708) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.IndexTask.determineShardSpecs(IndexTask.java:671) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.IndexTask.runTask(IndexTask.java:516) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.AbstractTask.run(AbstractTask.java:178) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runSequential(ParallelIndexSupervisorTask.java:1212) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runTask(ParallelIndexSupervisorTask.java:551) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.common.task.AbstractTask.run(AbstractTask.java:178) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:478) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:450) ~[druid-indexing-service-28.0.0-SNAPSHOT.jar:28.0.0-SNAPSHOT]
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131) ~[guava-31.1-jre.jar:?]
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74) ~[guava-31.1-jre.jar:?]
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82) ~[guava-31.1-jre.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_401]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_401]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_401]
