Extracting a field and parsing JSON in a kafka-connect sink

Question (votes: 0, answers: 2)

I have a kafka-connect flow of mongodb->kafka connect->elasticsearch sending data end to end, but the payload document is JSON-encoded. Here is my mongodb source document:

{
  "_id": "1541527535911",
  "enabled": true,
  "price": 15.99,
  "style": {
    "color": "blue"
  },
  "tags": [
    "shirt",
    "summer"
  ]
}

Here is my mongodb source connector config:

{
  "name": "redacted",
  "config": {
    "connector.class": "com.teambition.kafka.connect.mongo.source.MongoSourceConnector",
    "databases": "redacted.redacted",
    "initial.import": "true",
    "topic.prefix": "redacted",
    "tasks.max": "8",
    "batch.size": "1",
    "key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer": "org.apache.kafka.common.serialization.JSONSerializer",
    "key.serializer.schemas.enable": false,
    "value.serializer.schemas.enable": false,
    "compression.type": "none",
    "mongo.uri": "mongodb://redacted:27017/redacted",
    "analyze.schema": false,
    "schema.name": "__unused__",
    "transforms": "RenameTopic",
    "transforms.RenameTopic.type":
      "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.RenameTopic.regex": "redacted.redacted_Redacted",
    "transforms.RenameTopic.replacement": "redacted"
  }
}

In elasticsearch, it ends up looking like this:

{
  "_index" : "redacted",
  "_type" : "kafka-connect",
  "_id" : "{\"schema\":{\"type\":\"string\",\"optional\":true},\"payload\":\"1541527535911\"}",
  "_score" : 1.0,
  "_source" : {
    "ts" : 1541527536,
    "inc" : 2,
    "id" : "1541527535911",
    "database" : "redacted",
    "op" : "i",
    "object" : "{ \"_id\" : \"1541527535911\", \"price\" : 15.99,
      \"enabled\" : true, \"tags\" : [\"shirt\", \"summer\"],
      \"style\" : { \"color\" : \"blue\" } }"
  }
}

I want to use 2 single message transforms:

  1. ExtractField to grab object, which is a string of JSON
  2. Something to parse that JSON into an object, or just have the plain JsonConverter handle it, as long as it ends up correctly structured in elasticsearch.

I tried to do this with just ExtractField in my sink config, but I see this error logged by kafka:

kafka-connect_1       | org.apache.kafka.connect.errors.ConnectException:
Bulk request failed: [{"type":"mapper_parsing_exception",
"reason":"failed to parse", 
"caused_by":{"type":"not_x_content_exception",
"reason":"Compressor detection can only be called on some xcontent bytes or
compressed xcontent bytes"}}]

Here is my elasticsearch sink connector config. In this version I have it working, but I had to write a custom ParseJson SMT (a rough sketch of what it does is shown after the config). It works well, but if there is a better way to do this, or a way to do it with some combination of built-in pieces (converters, SMTs, whatever works), I'd love to see it.

{
  "name": "redacted",
  "config": {
    "connector.class":
      "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "batch.size": 1,
    "connection.url": "http://redacted:9200",
    "key.converter.schemas.enable": true,
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "schema.ignore": true,
    "tasks.max": "1",
    "topics": "redacted",
    "transforms": "ExtractFieldPayload,ExtractFieldObject,ParseJson,ReplaceId",
    "transforms.ExtractFieldPayload.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.ExtractFieldPayload.field": "payload",
    "transforms.ExtractFieldObject.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.ExtractFieldObject.field": "object",
    "transforms.ParseJson.type": "reaction.kafka.connect.transforms.ParseJson",
    "transforms.ReplaceId.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.ReplaceId.renames": "_id:id",
    "type.name": "kafka-connect",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false
  }
}
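The custom SMT itself is not shown above, so here is a minimal sketch of what a transform like reaction.kafka.connect.transforms.ParseJson could look like. The package/class name just mirrors the config; the implementation is an assumption, and it presumes Jackson is on the classpath and that the value reaching the transform is a raw JSON string:

package reaction.kafka.connect.transforms; // hypothetical; name taken from the config above

import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.errors.DataException;
import org.apache.kafka.connect.transforms.Transformation;

import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch: parse a String record value containing JSON into a schemaless Map,
// so the sink sees a structured document instead of an escaped string.
public class ParseJson<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public R apply(R record) {
        // Pass through anything that is not a plain string.
        if (!(record.value() instanceof String)) {
            return record;
        }
        try {
            Map<?, ?> parsed = MAPPER.readValue((String) record.value(), Map.class);
            // A null value schema means "schemaless", which JsonConverter with
            // schemas.enable=false will serialize as a plain JSON object.
            return record.newRecord(record.topic(), record.kafkaPartition(),
                    record.keySchema(), record.key(), null, parsed, record.timestamp());
        } catch (Exception e) {
            throw new DataException("Record value is not parseable JSON", e);
        }
    }

    @Override
    public ConfigDef config() { return new ConfigDef(); }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}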
elasticsearch apache-kafka apache-kafka-connect confluent-platform
2 Answers

1 vote

I'm not sure about your Mongo connector. I don't recognize the class or those configs... most people probably use the Debezium Mongo connector.

I would set it up like this, though:

"connector.class": "com.teambition.kafka.connect.mongo.source.MongoSourceConnector",

"key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
"value.serializer": "org.apache.kafka.common.serialization.JSONSerializer",
"key.serializer.schemas.enable": false,
"value.serializer.schemas.enable": true,

schemas.enable matters here so that the internal Connect data classes know how to convert to and from other formats.
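To make that concrete: with schemas.enable=true, each message on the topic is wrapped in a schema/payload envelope. The escaped _id string in the Elasticsearch output above is exactly this envelope for the key; a string value such as the document id is serialized as:

{
  "schema": { "type": "string", "optional": true },
  "payload": "1541527535911"
}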

Then, in the Sink, you again need to use the JSON deserializer (via the converter) so that it creates a full object rather than a plaintext string, which is what you currently see in Elasticsearch ({\"schema\":{\"type\":\"string\" ...).

"connector.class":
  "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",

"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable": false,
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": true

If that doesn't work, then you might have to manually create your index mapping in Elasticsearch ahead of time so that it knows how to actually parse the strings you are sending.
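For example, assuming Elasticsearch 6.x (where a type name like kafka-connect still applies, per the sink config), a mapping matching the sample document might look like the sketch below; the index name and field types are assumptions:

PUT /redacted
{
  "mappings": {
    "kafka-connect": {
      "properties": {
        "id":      { "type": "keyword" },
        "enabled": { "type": "boolean" },
        "price":   { "type": "float" },
        "tags":    { "type": "keyword" },
        "style":   { "properties": { "color": { "type": "keyword" } } }
      }
    }
  }
}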


0 votes

I have a similar case to yours; mine is a sql server source and an elasticsearch target. In my case, the field lands in elasticsearch as a triple-double-quoted string where a json object was expected, like below. How did you solve yours?

"_source" : {
  "id" : 5,
  "Fname" : "Tom-1692537141",
  "Birthdate" : 4141,
  "CreatedAt" : 1692537141503,
  "UpdatedAt" : 1692537141503,
  "CreateDate" : 19589,
  "DepartmentId" : 3,
  "detail" : """{ "address": "XXXX", "block": "XX", "flat": "5" }"""
}
