Spark Scala aggregate function to find the occurrence count of a column value within a group


I have the following data:

group_id    id  name
----        --  ----
G1          1   apple
G1          2   orange
G1          3   apple
G1          4   banana
G1          5   apple
G2          6   orange
G2          7   apple
G2          8   apple

I want to find the number of times each value appears within its group. So far I have tried this:

val group = Window.partitionBy("group_id")
newdf.withColumn("name_appeared_count", approx_count_distinct($"name").over(group))

I want a result like this:

group_id    id  name   name_appeared_count
----        --  ----   -------------------
G1          1   apple       3
G1          2   orange      1
G1          3   apple       3
G1          4   banana      1
G1          5   apple       3
G2          6   orange      1
G2          7   apple       2
G2          8   apple       2

Thanks in advance!

scala apache-spark window partition
1 Answer

The expression approx_count_distinct($"name").over(group) computes the number of distinct name values per group, which, judging by your expected output, is not what you want. Using count("name") over a window partitioned by ("group_id", "name") produces the desired counts:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._  // needed for toDF when not running in spark-shell (spark is the SparkSession)

val df = Seq(
  ("G1", 1, "apple"),
  ("G1", 2, "orange"),
  ("G1", 3, "apple"),
  ("G1", 4, "banana"),
  ("G1", 5, "apple"),
  ("G2", 6, "orange"),
  ("G2", 7, "apple"),
  ("G2", 8, "apple")
).toDF("group_id", "id", "name")

val group = Window.partitionBy("group_id", "name")

df.
  withColumn("name_appeared_count", count("name").over(group)).
  orderBy("id").
  show
// +--------+---+------+-------------------+
// |group_id| id|  name|name_appeared_count|
// +--------+---+------+-------------------+
// |      G1|  1| apple|                  3|
// |      G1|  2|orange|                  1|
// |      G1|  3| apple|                  3|
// |      G1|  4|banana|                  1|
// |      G1|  5| apple|                  3|
// |      G2|  6|orange|                  1|
// |      G2|  7| apple|                  2|
// |      G2|  8| apple|                  2|
// +--------+---+------+-------------------+
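
If you would rather avoid window functions, a roughly equivalent sketch (assuming the same df as above) is to aggregate the counts per (group_id, name) and join them back onto the original rows:

// Aggregate counts per (group_id, name), then join back so every original row keeps its count.
val counts = df.
  groupBy("group_id", "name").
  agg(count("name").as("name_appeared_count"))

df.
  join(counts, Seq("group_id", "name")).
  orderBy("id").
  show

Both versions should give the same result; the window-function form avoids the extra join, while the groupBy form can be simpler if you only need the aggregated counts themselves.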