Confluent Kafka Streams - class io.confluent.connect.avro.ConnectDefault could not be found

Problem description

I am using the JDBC source connector in query mode. Since no table name is specified, the schemas registered in the Schema Registry for the record key and record value have a null schema name and are assigned the default name "ConnectDefault", as defined in Confluent's AvroData class: https://github.com/confluentinc/schema-registry/blob/master/avro-converter/src/main/java/io/confluent/connect/avro/AvroData.java
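For illustration, a value schema registered under these conditions looks roughly like the following (the field names are hypothetical; the name/namespace pair is what the SpecificRecord deserializer later tries to resolve as a Java class):

```
{
  "type": "record",
  "name": "ConnectDefault",
  "namespace": "io.confluent.connect.avro",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "value", "type": ["null", "string"], "default": null}
  ]
}
```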

When I run a Kafka Streams application using the generated Avro sources and SpecificAvroSerde, I get this error:

Exception in thread "streams-app-6e39ebfd-db14-49bc-834f-afaf108a6d25-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to deserialize value for record. topic=topic-name, partition=0, offset=0
  at org.apache.kafka.streams.processor.internals.SourceNodeRecordDeserializer.deserialize(SourceNodeRecordDeserializer.java:46)
  at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:84)
  at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
  at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:474)
  at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:642)
  at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:548)
  at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:519)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 2
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class io.confluent.connect.avro.ConnectDefault specified in writer's schema whilst finding reader's schema for a SpecificRecord.

I tried publishing new versions of the key and value schemas for the topic using the table name as the schema name, and deleting the original versions that carried the "name":"ConnectDefault","namespace":"io.confluent.connect.avro" attributes, but with no luck. Am I missing a class called ConnectDefault, or can I specify a schema name without a namespace in the source connector?

My Kafka Streams configuration:

streamsConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-app");
streamsConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
streamsConfig.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
streamsConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
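For completeness, a minimal sketch of how the SpecificAvroSerde mentioned above is typically wired in alongside this config (class names are from Confluent's kafka-streams-avro-serde artifact; MyRecord is a placeholder for the generated Avro class, not a name from the original post):

```
// Sketch only: register SpecificAvroSerde as the default value serde.
// The deserializer resolves the writer schema's full name to a generated
// class on the classpath, which is exactly why a writer schema named
// "io.confluent.connect.avro.ConnectDefault" fails to resolve.
streamsConfig.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
        Serdes.String().getClass());
streamsConfig.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
        SpecificAvroSerde.class);
// Ask the Avro deserializer for SpecificRecord instances, not GenericRecord
streamsConfig.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
```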

My Kafka Connect configuration:

name=source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin:
mode=incrementing
incrementing.column.name=id
query=QUERY
topic.prefix=topic-name

transforms=InsertKey, ExtractId
transforms.InsertKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.InsertKey.fields=id
transforms.ExtractId.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.ExtractId.field=id
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false

value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schemas.enable=true
value.converter.schema.registry.url=http://localhost:8081
1 Answer

The problem is that in query mode, the JDBC source connector's schema name defaults to null: https://github.com/confluentinc/kafka-connect-jdbc/issues/90

It looks like this can be solved by setting the schema name with an SMT (Single Message Transform), using the SetSchemaMetadata transform in the source connector: https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect
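Applied to the source connector configuration above, the fix might look like this (the transform alias "SetValueSchema" and the schema name "com.example.MyRecord" are illustrative; the schema name should match the full name of the generated Avro class the Streams app deserializes into):

```
transforms=InsertKey, ExtractId, SetValueSchema
transforms.InsertKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.InsertKey.fields=id
transforms.ExtractId.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.ExtractId.field=id
transforms.SetValueSchema.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
transforms.SetValueSchema.schema.name=com.example.MyRecord
```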
