Kafka consumer commit at my own offset ID is not working - commitSync(final Map<TopicPartition, OffsetAndMetadata> offsets)


I am using poll() to fetch a batch of messages (say 100) from a topic. In the configuration I have set enable.auto.commit to false and max.poll.records to 100.

Out of the 100 messages I receive, I consume only 10 for processing. I therefore expect the remaining 90 messages to be part of the next poll's response.

So I extract the offset of the 10th message and pass it as an argument to the commitSync() API, as shown below.

consumer.commitSync(Collections.singletonMap(
                            new TopicPartition(message.topic(), message.partition()), 
                            new OffsetAndMetadata(message.offset() + 1, message.leaderEpoch(), "")));

However, in the next poll I receive messages starting from the 101st message instead of the 11th.

Please help me figure out what I am doing wrong here; I do not want to use the seek() method.

My class file:

public class KafkaMessageConsumerImpl implements IKafkaConsumer {


    private Logger logger = LoggerFactory.getLogger(KafkaMessageConsumerImpl.class);

    private final String appGroupName;

    private Map<String, String> configs;

    private KafkaConsumer<String, Message> consumer;

    private final static int maxFetchRecords = 100;

    // actor name used to filter messages (initialization not shown in this excerpt)
    private String actor;


    @Override
    public List<Message> poll(int maxMessages) throws CommunicationException {

        int totalMessages = 0;
        try {
            logger.debug("going poll invoked with request count "+maxMessages);
            final ConsumerRecords<String, Message> messages = consumer.poll(Duration.ofMillis(Long.MAX_VALUE));
            logger.debug("poll completed with count: " +messages.count());

            final List<Message> _messages = new ArrayList<>();
            for (ConsumerRecord<String, Message> message : messages) {
                logger.debug("Received" + message.toString());

                String actorName = message.value().getMessageDetail().getActorName();
                if(actorName.equals(this.actor)) {
                    _messages.add(message.value());
                    totalMessages++;
                }

                if(totalMessages == maxMessages) {

                    /**
                     * commitSync is not working :(
                     */
                    consumer.commitSync(Collections.singletonMap(
                            new TopicPartition(message.topic(), message.partition()), 
                            new OffsetAndMetadata(message.offset() + 1, message.leaderEpoch(), "")));
                    break;
                }
            }
            return _messages;

        } catch (Throwable e) {
            logger.error("issue while fetching data from Kafka", e);
            throw e;
        }
    }


    private KafkaConsumer<String, Message> createConsumer(final CommunicationConfig config, 
            final String actor) throws CommunicationException {

        Properties props = new Properties();

        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, configs.get("kafka.bootstrapServers"));

        props.put("group.id", this.appGroupName+"_"+actor);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "impl.MessageDeserializer");
        props.put("enable.auto.commit", false);
        props.put("max.poll.records", maxFetchRecords);

        logger.info("consumer group {}", this.appGroupName + "_" + actor);
        KafkaConsumer<String, Message> consumer = new KafkaConsumer<String, Message>(props);

        consumer.subscribe(Arrays.asList(this.configs.get("TOPIC_NAME")));
        return consumer;
    }

}

Log file:

[Thread-5] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [192.168.112.219:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = harisurya221_dd
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 100
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = impl.MessageDeserializer

[Thread-5] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.1
[Thread-5] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 55783d3133a5a49a
apache-kafka kafka-consumer-api spring-kafka kafka-producer-api
1 Answer

Kafka maintains two pointers per partition for a consumer: the current position and the committed offset.

Committing an offset has no effect on the current position. The position is what poll() uses to decide where to fetch next; the committed offset is only consulted when a consumer (re)starts or a rebalance assigns it the partition.
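A minimal sketch of those two pointers, assuming a consumer that has already been assigned a hypothetical partition 0 of "my-topic" (both names are placeholders): position() keeps advancing with every poll(), while committed() only moves when you commit.

TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic/partition

consumer.poll(Duration.ofMillis(1000)); // advances the position past the fetched records

// Position: where the NEXT poll() will read from.
long position = consumer.position(tp);

// Committed offset: moved only by commits; may be null if nothing was committed yet.
OffsetAndMetadata committed = consumer.committed(tp);

// Committing an earlier offset moves 'committed' but leaves 'position' untouched,
// which is why the next poll() in the question still starts at message 101.
consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(11)));
logger.debug("position=" + consumer.position(tp) + ", committed=" + consumer.committed(tp));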

If you close the consumer and reopen it, you will get the result you want, because a new consumer initializes its position from the last committed offset.

If you want to re-fetch from offset 11 without closing the consumer, you must perform a seek() to reset the position, as sketched below.
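A minimal sketch of that fix, applied to the commit block from the question (same message, consumer, and loop; nextOffset is just a local variable introduced here):

if(totalMessages == maxMessages) {
    TopicPartition tp = new TopicPartition(message.topic(), message.partition());
    long nextOffset = message.offset() + 1;

    // Record progress for restarts and rebalances...
    consumer.commitSync(Collections.singletonMap(
            tp, new OffsetAndMetadata(nextOffset, message.leaderEpoch(), "")));

    // ...and also rewind the in-memory position so the NEXT poll()
    // re-delivers the remaining 90 records, starting at message 11.
    consumer.seek(tp, nextOffset);
    break;
}

Note that seek() only changes the consumer's in-memory position; the commitSync() is still what preserves your progress across restarts.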
