Creating a table in Google Cloud Bigtable with the HBase client API

Problem description

I am using Google Cloud Bigtable through the HBase client API in Scala. I am trying to create a table with a single column family, but I am getting an error.

Here is the code I wrote:

    import com.google.cloud.bigtable.hbase.BigtableConfiguration
    import org.apache.hadoop.hbase.{HColumnDescriptor, HTableDescriptor, TableName}
    import org.apache.hadoop.hbase.client.Connection
    import org.apache.hadoop.hbase.util.Bytes

    object TestBigtable {

      val columnFamilyName = Bytes.toBytes("cf1")

      def createConnection(projectId: String, instanceId: String): Connection = {
        BigtableConfiguration.connect(projectId, instanceId)
      }

      def createTableIfNotExists(connection: Connection, name: String): Unit = {
        val tableName = TableName.valueOf(name)
        val admin = connection.getAdmin()
        if (!admin.tableExists(tableName)) {
          val tableDescriptor = new HTableDescriptor(tableName)
          tableDescriptor.addFamily(new HColumnDescriptor(columnFamilyName))
          admin.createTable(tableDescriptor)
        }
      }

      def runner(projectId: String,
                 instanceId: String,
                 tableName: String): Unit = {
        val createTableConnection = createConnection(projectId, instanceId)
        try {
          createTableIfNotExists(createTableConnection, tableName)
        } finally {
          createTableConnection.close()
        }
      }
    }

Once I execute my jar, I get the following set of errors:

    18/07/25 10:36:20 INFO com.google.cloud.bigtable.grpc.BigtableSession: Bigtable options: BigtableOptions{dataHost=bigtable.googleapis.com, adminHost=bigtableadmin.googleapis.com, port=443, projectId=renault-ftt, instanceId=testfordeletion, appProfileId=, userAgent=hbase-1.4.3, credentialType=DefaultCredentials, dataChannelCount=4, retryOptions=RetryOptions{retriesEnabled=true, allowRetriesWithoutTimestamp=false, statusToRetryOn=[UNAUTHENTICATED, ABORTED, DEADLINE_EXCEEDED, UNAVAILABLE], initialBackoffMillis=5, maxElapsedBackoffMillis=60000, backoffMultiplier=2.0, streamingBufferSize=60, readPartialRowTimeoutMillis=60000, maxScanTimeoutRetries=3}, bulkOptions=BulkOptions{asyncMutatorCount=2, useBulkApi=true, bulkMaxKeyCount=125, bulkMaxRequestSize=1048576, autoflushMs=0, maxInflightRpcs=40, maxMemory=97307852, enableBulkMutationThrottling=false, bulkMutationRpcTargetMs=100}, callOptionsConfig=CallOptionsConfig{useTimeout=false, shortRpcTimeoutMs=60000, longRpcTimeoutMs=600000}, usePlaintextNegotiation=false, useCachedDataPool=false}.
18/07/25 10:36:20 INFO com.google.cloud.bigtable.grpc.io.OAuthCredentialsCache: Refreshing the OAuth token
Exception in thread "grpc-default-executor-0" java.lang.IllegalAccessError: tried to access field com.google.protobuf.AbstractMessage.memoizedSize from class com.google.bigtable.admin.v2.ListTablesRequest
    at com.google.bigtable.admin.v2.ListTablesRequest.getSerializedSize(ListTablesRequest.java:236)
    at io.grpc.protobuf.lite.ProtoInputStream.available(ProtoInputStream.java:108)
    at io.grpc.internal.MessageFramer.getKnownLength(MessageFramer.java:204)
    at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:136)
    at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:52)
    at io.grpc.internal.DelayedStream$5.run(DelayedStream.java:218)
    at io.grpc.internal.DelayedStream.drainPendingCalls(DelayedStream.java:132)
    at io.grpc.internal.DelayedStream.setStream(DelayedStream.java:101)
    at io.grpc.internal.DelayedClientTransport$PendingStream.createRealStream(DelayedClientTransport.java:361)
    at io.grpc.internal.DelayedClientTransport$PendingStream.access$300(DelayedClientTransport.java:344)
    at io.grpc.internal.DelayedClientTransport$5.run(DelayedClientTransport.java:302)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Can someone help me with this?

scala hbase bigtable
1 Answer

Reposting Solomon's comment as an answer:

io.grpc.protobuf.lite is in the stack trace. The Cloud Bigtable client was never tested with protobuf lite. A dependency graph would help. As a quick fix, you can also try the bigtable-hbase-1.x-shaded artifact instead of the bigtable-hbase-1.x artifact.

Your use of io.grpc.protobuf.lite is probably causing the problem. As far as I understand, io.grpc.protobuf.lite is intended mainly for Android clients.

Using the shaded artifact should prevent dependency conflicts, at the cost of a larger JAR and a potentially larger memory footprint. There are also similar issue reports with workarounds that may be worth reviewing.
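As a sketch of what the suggested swap looks like in an sbt build: the group and artifact IDs below are the standard Bigtable HBase client coordinates, but the version number is illustrative; pin it to whichever bigtable-hbase release the project already depends on.

```scala
// build.sbt -- swap the unshaded client for the shaded one.
// Before (unshaded; pulls in its own grpc/protobuf tree, which can
// clash with other protobuf variants such as protobuf-lite):
// libraryDependencies += "com.google.cloud.bigtable" % "bigtable-hbase-1.x" % "1.4.0"

// After (shaded; bundles relocated copies of grpc/protobuf inside the
// JAR, so they cannot conflict with protobuf-lite on the classpath):
libraryDependencies += "com.google.cloud.bigtable" % "bigtable-hbase-1.x-shaded" % "1.4.0"
```

If the build uses Maven instead of sbt, the same change applies to the `<artifactId>` in the corresponding `<dependency>` element.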
