For the dataset below, I need to produce summary rows based on a selected column. The sample dataset contains the following data.
+---------+----------+--------+---------+
| Column1 | Column2 | Expend | Expend2 |
+---------+----------+--------+---------+
| School1 | Student1 | 5 | 10 |
+---------+----------+--------+---------+
| School1 | Student2 | 11 | 12 |
+---------+----------+--------+---------+
| School2 | Student1 | 6 | 8 |
+---------+----------+--------+---------+
| School2 | Student2 | 7 | 8 |
+---------+----------+--------+---------+
I need summary rows for Column2, as shown below.
Required format:
+---------+----------+--------+---------+
| Column1 | Column2 | Expend | Expend2 |
+---------+----------+--------+---------+
| School1 | Total | 16 | 22 |
+---------+----------+--------+---------+
| School1 | Student1 | 5 | 10 |
+---------+----------+--------+---------+
| School1 | Student2 | 11 | 12 |
+---------+----------+--------+---------+
| School2 | Total | 13 | 16 |
+---------+----------+--------+---------+
| School2 | Student1 | 6 | 8 |
+---------+----------+--------+---------+
| School2 | Student2 | 7 | 8 |
+---------+----------+--------+---------+
I tried using the cube function on the dataset, but it did not give the expected result: I got null
values in place of Total.
That would be acceptable, but the data was still not in the format above.
I tried dataset.cube("Column2").agg(sum("Expend"), sum("Expend2"))
but that line only gives me data for Column2. How can I also retrieve the Column1 value along with the returned data?
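One way to keep Column1 with a cube is to cube over both columns, drop the grand-total rows where Column1 itself is null, and replace the null Column2 of the per-school subtotal rows with "Total". This is a sketch, not the accepted answer's approach; the session setup and object name are my own scaffolding:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CubeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("cube-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      ("School1", "Student1", 5, 10),
      ("School1", "Student2", 11, 12),
      ("School2", "Student1", 6, 8),
      ("School2", "Student2", 7, 8)
    ).toDF("Column1", "Column2", "Expend", "Expend2")

    // cube over both columns, keep only rows where Column1 is present,
    // and relabel the subtotal rows (null Column2) as "Total"
    val cubed = df.cube($"Column1", $"Column2")
      .agg(sum("Expend").as("Expend"), sum("Expend2").as("Expend2"))
      .where($"Column1".isNotNull)
      .withColumn("Column2", coalesce($"Column2", lit("Total")))

    cubed.orderBy($"Column1", $"Column2".desc).show(false)
    spark.stop()
  }
}
```

Note that `sum` over integer columns yields bigint, so the totals print as whole numbers here rather than the doubles shown in the answer below.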
From the existing dataframe, you can create a totals dataframe by doing a groupBy
on Column1 and summing all the Expend columns:
import org.apache.spark.sql.functions._
val totaldf = df.groupBy("Column1").agg(lit("Total").as("Column2"), sum("Expend").as("Expend"), sum("Expend2").as("Expend2"))
Then you just merge them:
df.union(totaldf).orderBy(col("Column1"), col("Column2").desc).show(false)
You should get the output you want:
+-------+--------+------+-------+
|Column1|Column2 |Expend|Expend2|
+-------+--------+------+-------+
|School1|Total |16.0 |22.0 |
|School1|Student2|11 |12 |
|School1|Student1|5 |10 |
|School2|Total |13.0 |16.0 |
|School2|Student2|7 |8 |
|School2|Student1|6 |8 |
+-------+--------+------+-------+
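One caveat with the union approach: `sum` over integer columns produces bigint, so Spark widens the Expend columns when it merges the two frames, which is why the Total rows print as 16.0 and 13.0. If you want a uniform type throughout, you can cast the detail rows up front. A sketch, with my own session scaffolding and variable names:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object UnionTypesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("union-types").getOrCreate()
    import spark.implicits._

    val df = Seq(
      ("School1", "Student1", 5, 10),
      ("School1", "Student2", 11, 12),
      ("School2", "Student1", 6, 8),
      ("School2", "Student2", 7, 8)
    ).toDF("Column1", "Column2", "Expend", "Expend2")

    // cast the detail rows to double so the schema matches the summed
    // totals and the union'd result has one consistent numeric type
    val detail = df
      .withColumn("Expend", col("Expend").cast("double"))
      .withColumn("Expend2", col("Expend2").cast("double"))

    val totaldf = detail.groupBy("Column1")
      .agg(lit("Total").as("Column2"),
           sum("Expend").as("Expend"),
           sum("Expend2").as("Expend2"))

    detail.union(totaldf)
      .orderBy(col("Column1"), col("Column2").desc)
      .show(false)

    spark.stop()
  }
}
```

`union` resolves columns by position, so keeping both frames in the same column order matters more than the column names.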