Why does this KStream/KTable topology propagate records that don't pass the filter?

Question · votes: 2 · answers: 1

I have the following topology:

  1. Creates a state store
  2. Filters records based on SOME_CONDITION, maps their values to a new entity, and finally publishes those records to another topic, STATIONS_LOW_CAPACITY_TOPIC

However, this is what I'm seeing on STATIONS_LOW_CAPACITY_TOPIC:

�   null
�   null
�   null
�   {"id":140,"latitude":"40.4592351","longitude":"-3.6915330",...}
�   {"id":137,"latitude":"40.4591366","longitude":"-3.6894151",...}
�   null

That is, it's as if it were also publishing to the STATIONS_LOW_CAPACITY_TOPIC topic the records that didn't pass the filter. How is that possible? How can I prevent them from being published?

This is the Kafka Streams code:

kStream.groupByKey().reduce({ _, newValue -> newValue },
                Materialized.`as`<Int, Station, KeyValueStore<Bytes, ByteArray>>(STATIONS_STORE)
                        .withKeySerde(Serdes.Integer())
                        .withValueSerde(stationSerde))
                .filter { _, value -> SOME_CONDITION }
                .mapValues { station ->
                    Stats(XXX)
                }
                .toStream().to(STATIONS_LOW_CAPACITY_TOPIC, Produced.with(Serdes.Integer(), stationStatsSerde))

UPDATE: I've simplified the topology and printed the resulting table. For some reason, the final table also contains null-valued records corresponding to upstream records that didn't pass the filter:

kStream.groupByKey().reduce({ _, newValue -> newValue },
                Materialized.`as`<Int, BiciMadStation, KeyValueStore<Bytes, ByteArray>>(STATIONS_STORE)
                        .withKeySerde(Serdes.Integer())
                        .withValueSerde(stationSerde))
                .filter { _, value ->
                    val conditionResult = (SOME_CONDITION)
                    println(conditionResult)
                    conditionResult
                }
                .print()

Log:

false
[KTABLE-FILTER-0000000002]: 1, (null<-null)
false
[KTABLE-FILTER-0000000002]: 2, (null<-null)
false
[KTABLE-FILTER-0000000002]: 3, (null<-null)
false
[KTABLE-FILTER-0000000002]: 4, (null<-null)
true
[KTABLE-FILTER-0000000002]: 5, (Station(id=5, latitude=40.4285524, longitude=-3.7025875, ...)<-null)
apache-kafka apache-kafka-streams
1 Answer

3 votes

The answer was in the javadoc of KTable.filter(...):

Note that filter for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record is forwarded.

That explains why I was seeing null-valued (tombstone) records being sent downstream.
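The semantics described in that javadoc can be modeled in plain Kotlin, without any Kafka dependency. The sketch below is illustrative only (`changelogFilter` and the sample values are made up, not part of the Kafka Streams API): for every update that fails the predicate, a changelog filter emits a tombstone (null value) for that key instead of silently dropping the record.

```kotlin
// Toy model of KTable.filter semantics: one output per input key,
// where a failing record is replaced by a tombstone (null value).
fun <K, V> changelogFilter(
    updates: List<Pair<K, V?>>,
    predicate: (K, V) -> Boolean
): List<Pair<K, V?>> = updates.map { (key, value) ->
    when {
        value == null -> key to null          // incoming tombstone: forwarded as-is
        predicate(key, value) -> key to value // passes the predicate: forwarded unchanged
        else -> key to null                   // fails the predicate: a tombstone is forwarded
    }
}

fun main() {
    val updates = listOf(1 to 10, 2 to 99, 3 to 5)
    // Only values >= 50 pass; keys 1 and 3 become tombstones downstream.
    println(changelogFilter(updates) { _, v -> v >= 50 })
    // prints [(1, null), (2, 99), (3, null)]
}
```

This matches the observed output: one record per upstream key, with null values for the keys whose records failed SOME_CONDITION.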

To avoid them, I converted the KTable to a KStream and only then applied the filter:

kStream.groupByKey().reduce({ _, newValue -> newValue },
                Materialized.`as`<Int, Stations, KeyValueStore<Bytes, ByteArray>>(STATIONS_STORE)
                        .withKeySerde(Serdes.Integer())
                        .withValueSerde(stationSerde))
                .toStream()
                .filter { _, value -> SOME_CONDITION }
                .mapValues { station ->
                    StationStats(station.id, station.latitude, station.longitude, ...)
                }
                .to(STATIONS_LOW_CAPACITY_TOPIC, Produced.with(Serdes.Integer(), stationStatsSerde))

Result:

4   {"id":4,"latitude":"40.4302937","longitude":"-3.7069171",...}
5   {"id":5,"latitude":"40.4285524","longitude":"-3.7025875",...}
...