Transactional message transfer between Kafka clusters with Spring

Problem description · Votes: 0 · Answers: 1

I have two Kafka clusters and need to implement a form of synchronization between them using spring-kafka.

[cluster A, topic A]  <-- [spring app] --> [cluster B, topic B]

I created a listener annotated with @Transactional that publishes messages via a KafkaTemplate. This works well while both clusters are reachable. When the connection to the target cluster is lost, the listener still seems to acknowledge new messages, but they are never published. I have tried manual acks on the listener, disabling auto-commit, and so on, but they do not appear to behave the way I expect: when the connection comes back online, the messages are never delivered. I need help.

    @KafkaListener(topics = "A", containerFactory = "syncLocalListenerFactory")
    public void consumeLocal(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, @Payload SyncEvent message, Acknowledgment ack) {
        kafkaSyncRemoteTemplate.send("B", key, message);
        ack.acknowledge();
    }
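A sketch of one possible explanation (my assumption, not confirmed by the question): KafkaTemplate.send() is asynchronous, so a broker outage only surfaces later on the producer thread, after ack.acknowledge() has already run. Blocking on the returned future makes the failure throw inside the listener, so the offset is not committed and the container's error handler can replay the record:

```java
    @KafkaListener(topics = "A", containerFactory = "syncLocalListenerFactory")
    public void consumeLocal(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                             @Payload SyncEvent message,
                             Acknowledgment ack) throws Exception {
        // send() returns a ListenableFuture; get() with a timeout propagates
        // a producer failure (e.g. TimeoutException) to the listener container
        // instead of silently logging it on the producer thread. The timeout
        // value here is an illustrative choice.
        kafkaSyncRemoteTemplate.send("B", key, message).get(30, TimeUnit.SECONDS);
        // Only reached when the remote send actually succeeded.
        ack.acknowledge();
    }
```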

I get the following logs:

2019-04-26 12:11:40.808  WARN 21304 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available.
2019-04-26 12:11:40.828  WARN 21304 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-2, groupId=app-sync] Connection to node 1001 could not be established. Broker may not be available.
2019-04-26 12:11:47.829 ERROR 21304 --- [ad | producer-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='...' and payload='...' to topic B:

org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for sync-2: 30002 ms has passed since batch creation plus linger time

2019-04-26 12:11:47.829 ERROR 21304 --- [ad | producer-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='...' and payload='...' to topic B:

org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for sync-2: 30002 ms has passed since batch creation plus linger time

--- EDIT ---

Here kafkaProperties holds the default spring-kafka properties read from the application.properties file; in this case they are all defaults:

    @Bean
    public ConsumerFactory<String, SyncEvent> syncLocalConsumerFactory() {
        Map<String, Object> config = kafkaProperties.buildConsumerProperties();

        config.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getStreams().getApplicationId() + "-sync");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        config.put(JsonDeserializer.VALUE_DEFAULT_TYPE, SyncEvent.class);
        config.put(JsonDeserializer.TRUSTED_PACKAGES, "app.structures");

        config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        DefaultKafkaConsumerFactory<String, SyncEvent> cf = new DefaultKafkaConsumerFactory<>(config);
        cf.setValueDeserializer(new JsonDeserializer<>(SyncEvent.class, objectMapper));
        return cf;
    }

    @Bean(name = "syncLocalListenerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, SyncEvent> kafkaSyncLocalListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, SyncEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(syncLocalConsumerFactory());
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setAckOnError(false);
        factory.setErrorHandler(new SeekToCurrentErrorHandler(0));
        return factory;
    }
Tags: spring, apache-kafka, kafka-consumer-api, kafka-producer-api, spring-kafka
1 Answer · Votes: 0

This website describes how to set up the error handler (using SeekToCurrentErrorHandler); it may help you. From the Spring documentation:

SeekToCurrentErrorHandler: An error handler that seeks to the current offset for each topic in the remaining records. Used to rewind partitions after a message failure so that it can be replayed.
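To illustrate, a hedged sketch of how this might be wired into the factory from the question. Note the question already constructs SeekToCurrentErrorHandler(0), i.e. zero retries; a positive maxFailures (3 here is an illustrative choice) allows the failed record to be redelivered before the recoverer gives up:

```java
    @Bean(name = "syncLocalListenerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, SyncEvent> kafkaSyncLocalListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, SyncEvent> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(syncLocalConsumerFactory());
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setAckOnError(false);
        // Re-seek the failed record so it is redelivered up to 3 times
        // before the (default, logging) recoverer is invoked. This only
        // takes effect if the listener actually throws on a failed send.
        factory.setErrorHandler(new SeekToCurrentErrorHandler(3));
        return factory;
    }
```

This only changes behavior if the send failure propagates out of the listener method; with a fire-and-forget send(), the error handler never sees an exception.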
