An easy way to center a column in a Spark DataFrame


I want to center a column in a Spark DataFrame, i.e. subtract the column's mean from every element of the column. Currently I do this manually: I first compute the column's mean, collect that value from the aggregated DataFrame, and then subtract it from the column. Is there a simpler way to do this in Spark? Is there a built-in function for it?
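For reference, the manual approach described above might look like the following minimal sketch (the DataFrame df and column name c2 are illustrative, not from the original question):

import org.apache.spark.sql.functions.avg

// Manual centering: trigger an action to collect the mean as a scalar,
// then subtract that scalar from the column.
val m = df.agg(avg("c2")).first().getDouble(0)
val centered = df.withColumn("c2", df("c2") - m)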

apache-spark apache-spark-sql centering
1 Answer

There is no built-in centering function, but you can do it with a user-defined function (UDF), as shown below:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{mean, udf}
import spark.implicits._ // enables .toDF on local collections

val df = spark.sparkContext.parallelize(List(
  (2.06, 0.56),
  (1.96, 0.72),
  (1.70, 0.87),
  (1.90, 0.64))).toDF("c1", "c2")

// Returns a UDF that subtracts a precomputed mean from each value.
def subMean(mean: Double) = udf[Double, Double]((value: Double) => value - mean)

// Replaces `col` with its centered version: the mean is computed once,
// then subtracted from every row via the UDF.
def getCenterDF(df: DataFrame, col: String): DataFrame = {
  val avg = df.select(mean(col)).first().getAs[Double](0)
  df.withColumn(col, subMean(avg)(df(col)))
}

scala> df.show(false)
+----+----+
|c1  |c2  |
+----+----+
|2.06|0.56|
|1.96|0.72|
|1.7 |0.87|
|1.9 |0.64|
+----+----+

scala> getCenterDF(df, "c2").show(false)
+----+--------------------+
|c1  |c2                  |
+----+--------------------+
|2.06|-0.13750000000000007|
|1.96|0.022499999999999853|
|1.7 |0.17249999999999988 |
|1.9 |-0.05750000000000011|
+----+--------------------+
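For what it's worth, the same centering can also be expressed without a UDF by combining built-in expressions. A minimal sketch using an unpartitioned window (note that an empty partitionBy moves all rows to a single partition, so this is only reasonable for small data):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{avg, col}

// avg over an unpartitioned window is the global mean of the column,
// so each row's value has the column mean subtracted in one expression.
val centered = df.withColumn("c2", col("c2") - avg("c2").over(Window.partitionBy()))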