Unable to write Kafka topic data to a Postgres database using the JDBC kafka-sink-connector

Problem description · Votes: 0 · Answers: 2

I have a Kafka topic to which we produce Avro records with the schema shown below.

{
  "type": "record",
  "name": "testValue",
  "namespace": "com.test.testValue",
  "fields": [
    {
        "name": "A",
      "type": "string"
    },
    {
      "name": "B",
      "type": "string"
    },
    {
      "name": "C",
      "type": {
        "type": "bytes",
        "logicalType": "decimal",
        "precision": 8,
        "scale": 2
      }
    },
    {
      "name": "D",
      "type": {
        "type": "long",
        "logicalType": "timestamp-millis"
      }
    },
    {
      "name": "E",
      "type": [
        {
          "type": "record",
          "name": "F",
          "fields": [
            {
              "name": "G",
              "type": {
                "type": "bytes",
                "logicalType": "decimal",
                "precision": 8,
                "scale": 2
              }
            }
          ]
        },
        {
          "type": "record",
          "name": "H",
          "fields": [
            {
              "name": "dummy",
              "type": "boolean",
              "default": true
            }
          ]
        },
        {
          "type": "record",
          "name": "I",
          "fields": [
            {
              "name": "J",
              "type": {
                "type": "bytes",
                "logicalType": "decimal",
                "precision": 8,
                "scale": 2
              }
            }
          ]
        }
      ]
    },
    {
      "name": "K",
      "type": {
        "type": "long",
        "logicalType": "timestamp-millis"
      }
    },
    {
      "name": "L",
      "type": "boolean"
    }
  ]
}

I have the following connector configuration.

 {
 "name" : "test-sink-database",
 "config" : {
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "tasks.max": "1",
  "connection.url": "jdbc:postgresql://database_url/postgres",
  "topics": "test",
  "connection.user": "postgres",
  "connection.password": "password",
  "table.name.format": "test_table",
  "auto.create": "true",
  "schema.registry.url": "http://schema-registry:8081",
  "value.converter.schema.registry.url": "http://schema-registry:8081",
  "key.converter.schema.registry.url": "http://schema-registry:8081",
  "name": "test-sink-database",
  "value.converter":"io.confluent.connect.avro.AvroConverter",
  "key.converter":"io.confluent.connect.avro.AvroConverter",
  "insert.mode":"insert"
  }
 }

This fails with the following error.

Caused by: org.apache.kafka.connect.errors.ConnectException: io.confluent.connect.avro.Union (STRUCT) type doesn't have a mapping to the SQL database column type

Full stack trace:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: io.confluent.connect.avro.Union (STRUCT) type doesn't have a mapping to the SQL database column type
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getSqlType(GenericDatabaseDialect.java:1727)
    at io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect.getSqlType(PostgreSqlDatabaseDialect.java:215)
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.writeColumnSpec(GenericDatabaseDialect.java:1643)
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.lambda$writeColumnsSpec$33(GenericDatabaseDialect.java:1632)
    at io.confluent.connect.jdbc.util.ExpressionBuilder.append(ExpressionBuilder.java:558)
    at io.confluent.connect.jdbc.util.ExpressionBuilder$BasicListBuilder.of(ExpressionBuilder.java:597)
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.writeColumnsSpec(GenericDatabaseDialect.java:1634)
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.buildCreateTableStatement(GenericDatabaseDialect.java:1557)
    at io.confluent.connect.jdbc.sink.DbStructure.create(DbStructure.java:91)
    at io.confluent.connect.jdbc.sink.DbStructure.createOrAmendIfNecessary(DbStructure.java:61)
    at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:121)
    at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
    at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
postgresql apache-kafka avro apache-kafka-connect confluent-platform
2 Answers
0 votes

I believe this is because the field E is a union of record types. The error states (STRUCT) type doesn't have a mapping to the SQL database column type. Your E is defined as follows:

"name": "E",
      "type": [
        {
          "type": "record",
          "name": "F",
          "fields": [
            {
              "name": "G",
              "type": {
                "type": "bytes",
                "logicalType": "decimal",
                "precision": 8,
                "scale": 2
              }
            }
          ]
        },
        {
          "type": "record",
          "name": "H",
          "fields": [
            {
              "name": "dummy",
              "type": "boolean",
              "default": true
            }
          ]
        },
        {
          "type": "record",
          "name": "I",
          "fields": [
            {
              "name": "J",
              "type": {
                "type": "bytes",
                "logicalType": "decimal",
                "precision": 8,
                "scale": 2
              }
            }
          ]
        }
      ]
    }

The JDBC sink connector cannot handle nested data structures. You could try the Cast single message transform (SMT) to convert the value to a string and check whether it gets pushed into the database correctly. Another option is to flatten the value with the Flatten SMT, for example:

"transforms": "flatten",
"transforms.flatten.type": "org.apache.kafka.connect.transforms.Flatten$Value"

0 votes

I am fairly new to Kafka and not sure whether this is the same problem as yours, but I will describe the issue I ran into and how I solved it. Again, being new to Kafka, I am not sure this is the best solution, but it fixed the problem I was facing.

While using a Debezium source connector against Oracle and a sink connector writing to Postgres, I got the following exception:

The exception was io.debezium.data.VariableScaleDecimal (STRUCT) type doesn't have a mapping to the SQL database column type

Adding

"decimal.handling.mode": "double"

to my source connector resolved the problem. In my case, double precision was sufficient for the Oracle decimal types.
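
For context, a rough sketch of where that property sits in a Debezium Oracle source connector configuration; everything other than decimal.handling.mode (the connector name and connection settings) is a placeholder, and the remaining required Debezium properties are omitted:

 {
  "name": "oracle-source",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-host",
    "database.user": "dbzuser",
    "database.password": "password",
    "database.dbname": "ORCLCDB",
    "decimal.handling.mode": "double"
  }
 }

With decimal.handling.mode set to double, Debezium emits Oracle decimal values as plain doubles rather than the VariableScaleDecimal struct, which appears to be what the JDBC sink could not map to a column type.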
