Trying to roll a new log segment for a topic partition with a start offset lower than the start offset of the active segment

Problem description · Votes: 0 · Answers: 1

I received an error message from Kafka that I don't understand. Can anyone provide a "translation" and explain how to get rid of it? Thanks.

[2022-10-22 20:04:41,299] ERROR [ReplicaManager broker=0] Error processing append operation on partition __consumer_offsets-39 (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: Trying to roll a new log segment for topic partition __consumer_offsets-39 with start offset 33072332448 =max(provided offset = Some(33072332448), LEO = 33072332448) lower than start offset of the active segment LogSegment(baseOffset=33249561480, size=160, lastModifiedTime=1666458058000, largestRecordTimestamp=Some(166
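
In plain terms: the broker wants to roll a new segment starting at the log end offset (33072332448 here), but the newest segment already on disk has a higher base offset (33249561480), which means the partition's segment files are in an inconsistent state. One way to see the mismatch is to list the segment files, whose names encode their base offsets, and to decode the suspect segment. A minimal diagnostic sketch, assuming the log directory /usr/local/kafka/kafka-logs (the directory shown in the answer below; substitute your broker's log.dirs), with the segment file name inferred from the baseOffset in the error:

# Segment file names are their base offsets, zero-padded to 20 digits;
# the active segment is the one with the largest base offset.
ls -l /usr/local/kafka/kafka-logs/__consumer_offsets-39/*.log

# kafka-dump-log.sh ships with Kafka; --offsets-decoder parses records
# from the internal __consumer_offsets topic. The file name below is
# derived from the error message and may differ on your disk.
bin/kafka-dump-log.sh --offsets-decoder \
  --files /usr/local/kafka/kafka-logs/__consumer_offsets-39/00000000033249561480.log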
apache-kafka kafka-consumer-api
1 Answer

0 votes

I ran into the same problem. Background: I upgraded Kafka from 2.0 to 2.8, and after about a day in production this error started appearing; it has persisted ever since. I'm not sure what to do next. Perhaps restarting the broker is worth trying; a recovery sketch follows the stack trace below. The error log is as follows:

[2023-09-07 17:03:44,689] INFO [ProducerStateManager partition=__consumer_offsets-9] Writing producer snapshot at offset 630856647 (kafka.log.ProducerStateManager)
[2023-09-07 17:03:44,706] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/kafka/kafka-logs] Rolled new log segment at offset 630856647 in 19 ms. (kafka.log.Log)
[2023-09-07 17:04:04,953] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/kafka/kafka-logs] Splitting overflowed segment LogSegment(baseOffset=629826838, size=104857521, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694077424203)) (kafka.log.Log)
[2023-09-07 17:04:10,157] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/kafka/kafka-logs] Replacing overflowed segment LogSegment(baseOffset=629826838, size=104857521, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694077424203)) with split segments ListBuffer(LogSegment(baseOffset=629826838, size=99608435, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071460696)), LogSegment(baseOffset=4265153878, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071469391)), LogSegment(baseOffset=4254638006, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071469417)), LogSegment(baseOffset=4247696294, size=2357, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071484399)), LogSegment(baseOffset=4244831254, size=2950, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071499464)), LogSegment(baseOffset=4243152598, size=9991, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071509577)), LogSegment(baseOffset=4241842326, size=578, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071509592)), LogSegment(baseOffset=4241019590, size=15906, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071509941)), LogSegment(baseOffset=4238524694, size=23495, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071531577)), LogSegment(baseOffset=4196619230, size=2342, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071534541)), LogSegment(baseOffset=4188973446, size=19404, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071556552)), LogSegment(baseOffset=4180801934, size=4714, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071561590)), LogSegment(baseOffset=4170124022, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071562467)), LogSegment(baseOffset=4143101662, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071562552)), LogSegment(baseOffset=4141155326, size=2068, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071564592)), LogSegment(baseOffset=4138444262, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071565469)), LogSegment(baseOffset=4120643382, size=14112, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071581488)), LogSegment(baseOffset=4116647966, size=12941, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071595624)), LogSegment(baseOffset=4111382614, size=82908, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071689985)), LogSegment(baseOffset=4057707254, size=22050, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071715010)), LogSegment(baseOffset=4051362438, size=304, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071715681)), LogSegment(baseOffset=4030865574, size=123480, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071855786)), LogSegment(baseOffset=4020661502, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071856080)), LogSegment(baseOffset=3999020742, size=1193939, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694073210753)), LogSegment(baseOffset=3949328830, size=304, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694073211506)), LogSegment(baseOffset=3895224534, size=994607, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074340277)), LogSegment(baseOffset=3834108334, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074340603)), LogSegment(baseOffset=3786670166, size=2357, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074343280)), LogSegment(baseOffset=3752294334, size=4699, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074348605)), LogSegment(baseOffset=3737104878, size=2646, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074351604)), LogSegment(baseOffset=3727882398, size=2357, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074354285)), LogSegment(baseOffset=3722859694, size=2342, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074356800)), LogSegment(baseOffset=3714745974, size=304, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694074357286)), LogSegment(baseOffset=3649810502, size=2703908, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694077424203))) (kafka.log.Log)
[2023-09-07 17:04:10,172] INFO Failed to delete log /usr/local/kafka/kafka-logs/__consumer_offsets-9/00000000000629826838.log.cleaned because it does not exist. (kafka.log.LogSegment)
[2023-09-07 17:04:10,172] INFO Failed to delete offset index /usr/local/kafka/kafka-logs/__consumer_offsets-9/00000000000629826838.index.cleaned because it does not exist. (kafka.log.LogSegment)
[2023-09-07 17:04:10,185] INFO Failed to delete time index /usr/local/kafka/kafka-logs/__consumer_offsets-9/00000000000629826838.timeindex.cleaned because it does not exist. (kafka.log.LogSegment)
[2023-09-07 17:04:10,235] ERROR [ReplicaManager broker=0] Error processing append operation on partition __consumer_offsets-9 (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: Trying to roll a new log segment for topic partition __consumer_offsets-9 with start offset 630856878 =max(provided offset = Some(630856878), LEO = 630856878) lower than start offset of the active segment LogSegment(baseOffset=4265153878, size=289, lastModifiedTime=1694077424000, largestRecordTimestamp=Some(1694071469391))
        at kafka.log.Log.$anonfun$roll$2(Log.scala:2055)
        at kafka.log.Log.roll(Log.scala:2482)
        at kafka.log.Log.maybeRoll(Log.scala:2017)
        at kafka.log.Log.$anonfun$append$2(Log.scala:1292)
        at kafka.log.Log.append(Log.scala:2482)
        at kafka.log.Log.appendAsLeader(Log.scala:1138)
        at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1068)
        at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1056)
        at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:958)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:192)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:946)
        at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:616)
        at kafka.coordinator.group.GroupMetadataManager.storeOffsets(GroupMetadataManager.scala:328)
        at kafka.coordinator.group.GroupCoordinator.$anonfun$doCommitOffsets$1(GroupCoordinator.scala:780)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:228)
        at kafka.coordinator.group.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:758)
        at kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:519)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:175)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
        at java.lang.Thread.run(Thread.java:745)
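
One recovery path reported for this symptom is to rebuild the damaged partition from a healthy replica rather than only restarting. This is a sketch, not a definitive procedure: it assumes offsets.topic.replication.factor is greater than 1 (so another in-sync replica still holds the data) and that the broker runs as a systemd service named kafka; both are assumptions, so adjust paths and service management to your setup.

# 1. Stop the affected broker (however it is managed in your environment).
systemctl stop kafka

# 2. Move the inconsistent partition directory aside (path taken from the
#    log above). On restart, the broker re-fetches __consumer_offsets-9
#    from the current leader. Do NOT do this if this broker holds the
#    only copy of the partition; the offsets would be lost.
mv /usr/local/kafka/kafka-logs/__consumer_offsets-9 /var/tmp/__consumer_offsets-9.bak

# 3. Restart and confirm the replica rejoins the ISR.
systemctl start kafka
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic __consumer_offsets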