Flink cannot parse the Avro schema that Debezium writes to Kafka when using the DataStream API

Problem description

There does not seem to be a ready-made way to automatically keep the latest schema from the Confluent Schema Registry in sync with the data in the Kafka topic, so I added the schema manually by copying it from the Confluent Schema Registry REST API response.
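For reference, a minimal sketch of how the latest value schema could instead be fetched programmatically from the registry. The LatestSchemaFetcher helper is illustrative only, and the "<topic>-value" subject name assumes the default TopicNameStrategy:

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;
import org.apache.avro.Schema;

// Illustrative helper: pulls the latest registered value schema for a topic
// from the Confluent Schema Registry instead of copy-pasting it by hand.
public class LatestSchemaFetcher {

    public static Schema fetchLatestValueSchema(String registryUrl, String topic) throws Exception {
        CachedSchemaRegistryClient client = new CachedSchemaRegistryClient(registryUrl, 10);
        // The default TopicNameStrategy registers value schemas under "<topic>-value"
        SchemaMetadata metadata = client.getLatestSchemaMetadata(topic + "-value");
        return new Schema.Parser().parse(metadata.getSchema());
    }
}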

The question is: how do I read Debezium Avro-formatted data from Kafka with Flink, using the DataStream API?

My environment:

  • Java: 17.0.10 2024-01-16 LTS
  • Flink: 1.18.1 (I will be using the DataStream API)
  • Kafka cluster: 3.5.0
  • Debezium (MariaDB -> Kafka), serializing to Avro
  • Bitnami (Confluent) Schema Registry

The schema is:

{
    "type": "record",
    "name": "Envelope",
    "namespace": "uat.dwh.debezium.data",
    "fields": [
        {
            "name": "before",
            "type": [
                "null",
                {
                    "type": "record",
                    "name": "Value",
                    "fields": [
                        {
                            "name": "Sender",
                            "type": "int"
                        },
                        {
                            "name": "Target",
                            "type": "int"
                        },
                        {
                            "name": "SeqNum",
                            "type": "long"
                        },
                        {
                            "name": "TmSent",
                            "type": "long"
                        },
                        {
                            "name": "Action",
                            "type": {
                                "type": "int",
                                "connect.type": "int16"
                            }
                        },
                        {
                            "name": "OrderID",
                            "type": "long"
                        },
                        {
                            "name": "ClientID",
                            "type": "long"
                        },
                        {
                            "name": "SymbolID",
                            "type": "long"
                        },
                        {
                            "name": "OrderType",
                            "type": {
                                "type": "int",
                                "connect.type": "int16"
                            }
                        }
                    ],
                    "connect.name": "uat.dwh.debezium.data.Value"
                }
            ],
            "default": null
        },
        {
            "name": "after",
            "type": [
                "null",
                "Value"
            ],
            "default": null
        },
        {
            "name": "source",
            "type": {
                "type": "record",
                "name": "Source",
                "namespace": "io.debezium.connector.mysql",
                "fields": [
                    {
                        "name": "version",
                        "type": "string"
                    },
                    {
                        "name": "connector",
                        "type": "string"
                    },
                    {
                        "name": "name",
                        "type": "string"
                    },
                    {
                        "name": "ts_ms",
                        "type": "long"
                    },
                    {
                        "name": "snapshot",
                        "type": [
                            {
                                "type": "string",
                                "connect.version": 1,
                                "connect.parameters": {
                                    "allowed": "true,last,false,incremental"
                                },
                                "connect.default": "false",
                                "connect.name": "io.debezium.data.Enum"
                            },
                            "null"
                        ],
                        "default": "false"
                    },
                    {
                        "name": "db",
                        "type": "string"
                    },
                    {
                        "name": "sequence",
                        "type": [
                            "null",
                            "string"
                        ],
                        "default": null
                    },
                    {
                        "name": "table",
                        "type": [
                            "null",
                            "string"
                        ],
                        "default": null
                    },
                    {
                        "name": "server_id",
                        "type": "long"
                    },
                    {
                        "name": "gtid",
                        "type": [
                            "null",
                            "string"
                        ],
                        "default": null
                    },
                    {
                        "name": "file",
                        "type": "string"
                    },
                    {
                        "name": "pos",
                        "type": "long"
                    },
                    {
                        "name": "row",
                        "type": "int"
                    },
                    {
                        "name": "thread",
                        "type": [
                            "null",
                            "long"
                        ],
                        "default": null
                    },
                    {
                        "name": "query",
                        "type": [
                            "null",
                            "string"
                        ],
                        "default": null
                    }
                ],
                "connect.name": "io.debezium.connector.mysql.Source"
            }
        },
        {
            "name": "op",
            "type": "string"
        },
        {
            "name": "ts_ms",
            "type": [
                "null",
                "long"
            ],
            "default": null
        },
        {
            "name": "transaction",
            "type": [
                "null",
                {
                    "type": "record",
                    "name": "block",
                    "namespace": "event",
                    "fields": [
                        {
                            "name": "id",
                            "type": "string"
                        },
                        {
                            "name": "total_order",
                            "type": "long"
                        },
                        {
                            "name": "data_collection_order",
                            "type": "long"
                        }
                    ],
                    "connect.version": 1,
                    "connect.name": "event.block"
                }
            ],
            "default": null
        }
    ],
    "connect.version": 1,
    "connect.name": "uat.dwh.debezium.data.Envelope"
}

I started my Flink application with the following code:

package com.zfx.topbook;

import io.confluent.kafka.serializers.subject.TopicRecordNameStrategy;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
import java.util.Properties;

public class TopBook {
    public static void main(String[] args) throws Exception {

        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    
        Properties properties = new Properties();
        Schema avroSchema = new Schema.Parser().parse(properties.getProperty("avro.schema"));

//        DeserializationSchema<GenericRecord> deserializer = AvroDeserializationSchema.forGeneric(avroSchema);
        DeserializationSchema<GenericRecord> deserializer = ConfluentRegistryAvroDeserializationSchema.forGeneric(avroSchema);

        KafkaSource<GenericRecord> source = KafkaSource.<GenericRecord>builder()
                .setBootstrapServers(properties.getProperty("bootstrap.servers"))
                .setTopics(properties.getProperty("topic"))
                .setGroupId(properties.getProperty("group.id"))
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(deserializer)
                .setProperties(properties)
                .build();
    
        env.fromSource(
                source,
                WatermarkStrategy.forMonotonousTimestamps(),
                "Kafka Source"
            )
            .print();
    
        env.execute();
    
    }

}

In the end I get the following error:

Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
    at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
    at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
    at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
    at org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
    at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
    at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
    at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1267)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
    at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
    at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
    at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
    at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
    at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
    at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
    at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
    at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
    at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
    at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110)
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59)
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:176)
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:107)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:285)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:276)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:269)
    at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:764)
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:741)
    at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:488)
    at jdk.internal.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222)
    at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168)
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
    at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280)
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241)
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253)
    ... 5 more
Caused by: java.io.IOException: Failed to deserialize consumer record due to
    at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:56)
    at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:33)
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:160)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:419)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
    at java.base/java.lang.Thread.run(Thread.java:842)
Caused by: java.io.EOFException
    at org.apache.avro.io.BinaryDecoder$InputStreamByteSource.readRaw(BinaryDecoder.java:883)
    at org.apache.avro.io.BinaryDecoder.doReadBytes(BinaryDecoder.java:405)
    at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:313)
    at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:208)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:469)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:459)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:191)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:259)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:259)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:142)
    at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
    at org.apache.flink.connector.kafka.source.reader.deserializer.KafkaValueOnlyDeserializationSchemaWrapper.deserialize(KafkaValueOnlyDeserializationSchemaWrapper.java:51)
    at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:53)
    ... 14 more
apache-flink avro flink-streaming
1 Answer

The main cause here was using ConfluentRegistryAvroDeserializationSchema incorrectly, without specifying the Schema Registry URL (I still don't know why we cannot simply pass a Schema object on its own).

On top of that, there was some confusion with the schema IDs in the Confluent Schema Registry: for some reason the schema was not registered under the ID embedded in the Kafka messages.

After fixing these two things, I was finally able to deserialize records from Kafka in the Confluent Schema Registry Avro format.
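One way to check for a schema ID mismatch is to look at which ID is actually embedded in the records. Below is a minimal sketch, assuming the standard Confluent wire format (magic byte 0x0 followed by a 4-byte big-endian schema ID); the WireFormat class itself is illustrative:

import java.nio.ByteBuffer;

// Illustrative helper: reads the schema ID that the Confluent serializer
// prepends to every Avro payload, so it can be compared with the IDs
// actually registered in the Schema Registry (GET /schemas/ids/<id>).
public final class WireFormat {

    public static int schemaIdOf(byte[] kafkaValue) {
        ByteBuffer buffer = ByteBuffer.wrap(kafkaValue);
        if (buffer.get() != 0x0) {
            throw new IllegalArgumentException("Not in Confluent wire format (missing magic byte)");
        }
        return buffer.getInt(); // big-endian schema ID used for the registry lookup
    }
}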

The code of the fixed solution is:

public class TopBook {
    // imports are the same as in the question above
    public static void main(String[] args) throws Exception {

        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = getProperties();
        Schema avroSchema = new Schema.Parser().parse(properties.getProperty("avro.schema.row"));
        DeserializationSchema<GenericRecord> deserializer = ConfluentRegistryAvroDeserializationSchema.forGeneric(avroSchema, properties.getProperty("schema.registry.url"));

        KafkaSource<GenericRecord> source = KafkaSource.<GenericRecord>builder()
                .setBootstrapServers(properties.getProperty("bootstrap.servers"))
                .setTopics(properties.getProperty("topic"))
                .setGroupId(properties.getProperty("group.id"))
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(deserializer)
                .setProperties(properties)
                .build();

        env
                .fromSource(
                    source,
                    WatermarkStrategy.forMonotonousTimestamps(),
                    "Kafka Source"
                )
                .print();

        env.execute();

    }
}
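The getProperties() helper is not shown above; a minimal sketch of what it might look like (the application.properties file name and location are assumptions; it needs java.io.InputStream, java.io.IOException and java.util.Properties imports, and the keys are the ones the code reads):

    // Hypothetical helper, not part of the original snippet: loads the keys the job
    // reads (bootstrap.servers, topic, group.id, schema.registry.url, avro.schema.row)
    // from a properties file on the classpath.
    private static Properties getProperties() throws IOException {
        Properties properties = new Properties();
        try (InputStream in = TopBook.class.getResourceAsStream("/application.properties")) {
            properties.load(in);
        }
        return properties;
    }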