Error while fetching metadata with correlation id 32


I am using the S3 source connector and trying to import data from S3 into an auto-created topic. I am running on an AWS MSK cluster.

[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,127] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 1 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,127] INFO [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Cluster ID: J4cTke2TRmOSsoYBO5uZdA (org.apache.kafka.clients.Metadata:279)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,478] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 3 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,581] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 4 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)

My S3 source connector configuration:

connector.class=io.confluent.connect.s3.source.S3SourceConnector
useAccelerateMode=true
s3.region=us-east-1
confluent.topic.bootstrap.servers=b-4.sink.2uql3y.c4.kafka.us-east-1.amazonaws.com:9092
auto.create.topics.enable=true
flush.size=7
schema.compatibility=NONE
tasks.max=2
topics=target-topic
pathStyleAccess=true
schema.enable=false
key.converter.schemas.enable=false
format.class=io.confluent.connect.s3.format.json.JsonFormat
aws.region=us-east-1
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
value.converter=org.apache.kafka.connect.storage.StringConverter
storage.class=io.confluent.connect.s3.storage.S3Storage
errors.log.enable=true
s3.bucket.name=bucket-name
key.converter=org.apache.kafka.connect.storage.StringConverter
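A note on the config above: `auto.create.topics.enable` is a broker-side setting, not a connector property, and MSK defaults it to false unless it is enabled in the cluster configuration. So `UNKNOWN_TOPIC_OR_PARTITION` usually just means the topic the producer writes to does not exist yet. A minimal sketch of creating it by hand (the topic name, partition/replication counts, and `<bootstrap-server>` placeholder are assumptions; substitute your own values):

```shell
# Create the missing target topic manually on the MSK cluster.
# Partition and replication-factor values here are illustrative only.
bin/kafka-topics.sh --create \
  --topic source-topic \
  --partitions 3 \
  --replication-factor 3 \
  --bootstrap-server <bootstrap-server>
```

After the topic exists, the connector's producer should stop logging the metadata warnings.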

After the default topic _confluent-command was created, I started receiving messages from S3 into the newly auto-created topic.

 bin/kafka-console-consumer.sh --topic _confluent-command --consumer.config /home/ec2-user/kafka_2.12-3.5.1/config/consumer.properties --from-beginning --bootstrap-server <bootstrap-server>
�
�eyJhbGciOiJub25lIn0.eyJpc3MiOiJDb25mbHVlbnQiLCJhdWQiOiJ0cmlhbCIsImV4cCI6MTcwMDUwNTg3OCwianRpIjoiamoySWFWQTV6ckVjSG94ZUw5X1dsUSIsImlhdCI6MTY5NzkxMzg3NywibmJmIjoxNjk3OTEzNzU3LCJzdWIiOiJDb25mbHVlbnQgRW50ZXJwcmlzZSIsIm1vbml0b3JpbmciOnRydWUsImxpY2Vuc2VUeXBlIjoidHJpYWwifQ.
amazon-s3 apache-kafka apache-kafka-connect aws-msk
1 Answer

The contents of the _confluent-command topic are Protobuf-serialized and not human-readable; only Confluent maintains the Protobuf schema for reading that data, to ensure its licensing cannot be bypassed.
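That said, the readable fragment in the console output above is a JWT (the Confluent trial-license token), and JWT segments are plain base64url-encoded JSON. A minimal sketch decoding its header, copied verbatim from the question's output (the `=` padding is added because base64url omits it):

```shell
# Decode the first (header) segment of the JWT shown in the console output.
printf 'eyJhbGciOiJub25lIn0=' | base64 -d
# → {"alg":"none"}
```

The Protobuf-wrapped record envelope around the token is what the consumer cannot render, which is why the output shows stray replacement characters before and after the JWT.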
