Debezium's ExtractNewRecordState transform does not work


I am building a data synchronizer that captures data changes from a MySQL source and exports them to Hive.

I chose Kafka Connect to implement this, using Debezium as the source connector and the Confluent HDFS connector as the sink.
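For context, here is a minimal sketch of what the sink side of such a pipeline can look like, assuming the Confluent HDFS connector's standard registration payload (the connector name, HDFS URL, and flush.size are placeholders; the topic name matches the one consumed below):

{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "topics": "dbserver1.test_data_1.student3",
        "hdfs.url": "hdfs://namenode:8020",
        "flush.size": "3"
    }
}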

Debezium provides a single message transform (SMT) that lets me extract the after field from its complex change-event messages. I configured it exactly as the documentation describes, but it did not work:

{
    // omit ...
    "transform": "unwrap",
    "transform.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
}

I tried configuring the transform on both the source connector and the sink connector, but neither worked. In fact, when I configure it on the source connector and then inspect the messages in the corresponding topic, they still contain all the fields, including before, source, and so on.

ythh@openstack2:~/confluent-5.5.0$ bin/kafka-avro-console-consumer --from-beginning --bootstrap-server localhost:9092 --topic dbserver1.test_data_1.student3
{"before":null,"after":{"dbserver1.test_data_1.student3.Value":{"id":1,"name":"ggg"}},"source":{"version":"1.1.1.Final","connector":"mysql","name":"dbserver1","ts_ms":1589005572000,"snapshot":{"string":"false"},"db":"test_data_1","table":{"string":"student3"},"server_id":1,"gtid":null,"file":"mysql-bin.000011","pos":9474,"row":0,"thread":{"long":6013},"query":null},"op":"c","ts_ms":{"long":1589005572172},"transaction":null}
{"before":null,"after":{"dbserver1.test_data_1.student3.Value":{"id":2,"name":"no way"}},"source":{"version":"1.1.1.Final","connector":"mysql","name":"dbserver1","ts_ms":1589005893000,"snapshot":{"string":"false"},"db":"test_data_1","table":{"string":"student3"},"server_id":1,"gtid":null,"file":"mysql-bin.000011","pos":11218,"row":0,"thread":{"long":6030},"query":null},"op":"c","ts_ms":{"long":1589005893773},"transaction":null}
{"before":null,"after":{"dbserver1.test_data_1.student3.Value":{"id":3,"name":"not work"}},"source":{"version":"1.1.1.Final","connector":"mysql","name":"dbserver1","ts_ms":1589005900000,"snapshot":{"string":"false"},"db":"test_data_1","table":{"string":"student3"},"server_id":1,"gtid":null,"file":"mysql-bin.000011","pos":11501,"row":0,"thread":{"long":6030},"query":null},"op":"c","ts_ms":{"long":1589005900724},"transaction":null}

I also checked the Kafka Connect log; here is some of the output:

ythh@openstack2:~/kafka_2.12-2.5.0/logs$ cat connect.log | grep transform
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
[2020-05-09 14:29:30,470] INFO    transform.unwrap.type = io.debezium.transforms.ExtractNewRecordState (io.debezium.connector.common.BaseSourceTask:97)
[2020-05-09 14:29:30,470] INFO    transform = unwrap (io.debezium.connector.common.BaseSourceTask:97)
[2020-05-09 14:29:30,471] INFO    transform.unwrap.drop.tombstones = false (io.debezium.connector.common.BaseSourceTask:97)
[2020-05-09 14:29:30,471] INFO    transform.unwrap.delete.handling.mode = rewrite (io.debezium.connector.common.BaseSourceTask:97)
        transforms = []
        transforms = []
[2020-05-09 14:29:32,419] INFO    transform.unwrap.type = io.debezium.transforms.ExtractNewRecordState (io.debezium.connector.common.BaseSourceTask:97)
[2020-05-09 14:29:32,419] INFO    transform = unwrap (io.debezium.connector.common.BaseSourceTask:97)
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
        transforms = []
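To see which configuration keys the Connect worker actually registered for a connector, the Connect REST API can be queried (default port 8083; the connector name here is hypothetical):

curl -s http://localhost:8083/connectors/mysql-source-connector/config

Note that the repeated transforms = [] lines above show that, as parsed by the worker, no transform chain was registered at all.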
1 Answer

It looks like you made a typo (transform instead of transforms). Try this configuration:

{
    // omit ...
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
}
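For completeness, here is a minimal sketch of a full registration payload with the corrected keys; the connector name and connection properties are placeholders, and the drop.tombstones / delete.handling.mode values mirror the ones visible in your log:

{
    "name": "mysql-source-connector",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        // connection properties omitted ...
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
        "transforms.unwrap.drop.tombstones": "false",
        "transforms.unwrap.delete.handling.mode": "rewrite"
    }
}

Once the connector is updated, new messages in the topic should contain only the fields from after (plus a __deleted marker, since delete.handling.mode is rewrite).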