I am using spring-kafka 2.1.10.RELEASE. I have a consumer with the following properties (nearly all of them are copied below; a sketch of the matching Spring wiring is shown after the list):
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka1.local:9093, kafka2.local:9093, kafka3.local:9093]
check.crcs = true
client.id = kafkaListener-0
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = kafkaLisneterContainer
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
max.poll.interval.ms = 300000
max.poll.records = 50
metadata.max.age.ms = 300000
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
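For reference, here is a minimal sketch of how such a consumer could be wired in Spring. This is my assumption of the setup, not the real code: the class and bean names, the String deserializers, and the topic are placeholders, and only the non-default properties from the dump above are set.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    // Mirrors the property dump above; everything else is left at its default.
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka1.local:9093,kafka2.local:9093,kafka3.local:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "kafkaLisneterContainer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);       // client auto-commits offsets...
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5000);  // ...every 5 seconds
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}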
The Apache Kafka version in production is 2.11-1.0.0-0pan4, running as a cluster of three Kafka nodes.
I am facing a serious problem that I cannot even reproduce locally. Here is what happens:
controller.2019-01-17-03.aaa-aa3.gz:2019-01-17 06:47:39,365 +0000 [controller-event-thread] [kafka.controller.KafkaController] INFO [Controller id=3] New leader and ISR for partition topic_name-0 is {"leader":1,"leader_epoch":3,"isr":[1,3]} (kafka.controller.KafkaController)
The strangest part: inside the application everything looks fine and it works normally. The Spring consumer reads new messages and sends them on to Kafka, and I see logs like the ones below. It looks as if the Spring consumer keeps its offsets in memory and sends the commits to the remote Kafka broker (no errors or anything like that):
2019-01-23 14:03:20,975 +0000 [kafkaLisneterContainer-0-C-1] [Fetcher] DEBUG [Consumer clientId=kafkaListener-0, groupId=kafkaLisneterContainer] Fetch READ_UNCOMMITTED at offset 164871 for partition aaa-1 returned fetch data (error=NONE, highWaterMark=164871, lastStableOffset=-1, logStartOffset=116738, abortedTransactions=null, recordsSizeInBytes=0)
2019-01-23 14:03:20,975 +0000 [externalbetting] [kafkaLisneterContainer-0-C-1] [Fetcher] DEBUG [Consumer clientId=kafkaListener-0, groupId=kafkaLisneterContainer] Added READ_UNCOMMITTED fetch request for partition eaaa-1 at offset 164871 to node aaa-aa1.local:9093 (id: 1 rack: null)
2019-01-23 14:03:20,975
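To double-check whether those auto-commits actually reach the broker, the committed offset can be read back with a plain KafkaConsumer. A small sketch (the partition aaa-1 and the group id are taken from the logs above; everything else is an assumption):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1.local:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "kafkaLisneterContainer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("aaa", 1);
            // committed() asks the group coordinator, so it shows what the broker
            // really has, not what the Spring consumer keeps in memory.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("Broker-side committed offset: "
                    + (committed == null ? "none" : committed.offset()));
        }
    }
}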
5) But the lag in Apache Kafka grows anyway. If I restart my application, the Spring consumer bean is recreated and loses the offsets it kept in memory. It then reads the lag back from Kafka and processes those records a second time.
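If the automatic commit turns out to be the unreliable part, one workaround I am considering is to disable enable.auto.commit and acknowledge each record manually, so that the offset reaches the broker right after processing. A sketch against the spring-kafka 2.1 API (the listener class and process() are hypothetical; the topic name is taken from the controller log above):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckListener {

    // Requires enable.auto.commit=false on the consumer and, on the container factory:
    // factory.getContainerProperties()
    //        .setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
    @KafkaListener(topics = "topic_name", groupId = "kafkaLisneterContainer")
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        process(record);      // hypothetical business logic
        ack.acknowledge();    // commit this offset to the broker immediately
    }

    private void process(ConsumerRecord<String, String> record) {
        // placeholder for the real processing
    }
}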
Please help me find the cause!