Calculating percentages on a PySpark dataframe

Problem description · Votes: 1 · Answers: 4

I have a PySpark dataframe built from the Titanic data, a copy of which I have pasted below. How do I add a percentage column for each bucket?

[image: the dataframe, showing survived/sex buckets and their counts]

Thanks for your help!

apache-spark pyspark spark-dataframe
4 Answers
6 votes

First, a literal DataFrame with the input data:

import findspark
findspark.init()
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("test").getOrCreate()
df = spark.createDataFrame([
    (1,'female',233),
    (None,'female',314),
    (0,'female',81),
    (1, None, 342), 
    (1, 'male', 109),
    (None, None, 891),
    (0, None, 549),
    (None, 'male', 577),
    (0, None, 468)
    ], 
    ['survived', 'sex', 'count'])

Then we use a window function to compute the sum of count over a partition that contains the complete set of rows (in effect, the grand total):

import pyspark.sql.functions as f
from pyspark.sql.window import Window
df = df.withColumn('percent', f.col('count')/f.sum('count').over(Window.partitionBy()))
df.orderBy('percent', ascending=False).show()

+--------+------+-----+--------------------+
|survived|   sex|count|             percent|
+--------+------+-----+--------------------+
|    null|  null|  891|                0.25|
|    null|  male|  577| 0.16189674523007858|
|       0|  null|  549| 0.15404040404040403|
|       0|  null|  468| 0.13131313131313133|
|       1|  null|  342| 0.09595959595959595|
|    null|female|  314| 0.08810325476992144|
|       1|female|  233|  0.0653759820426487|
|       1|  male|  109| 0.03058361391694725|
|       0|female|   81|0.022727272727272728|
+--------+------+-----+--------------------+

If we split the step above in two, it is easier to see that the window-function sum simply attaches the same total to every row:

df = df\
  .withColumn('total', f.sum('count').over(Window.partitionBy()))\
  .withColumn('percent', f.col('count')/f.col('total'))
df.show()

+--------+------+-----+--------------------+-----+
|survived|   sex|count|             percent|total|
+--------+------+-----+--------------------+-----+
|       1|female|  233|  0.0653759820426487| 3564|
|    null|female|  314| 0.08810325476992144| 3564|
|       0|female|   81|0.022727272727272728| 3564|
|       1|  null|  342| 0.09595959595959595| 3564|
|       1|  male|  109| 0.03058361391694725| 3564|
|    null|  null|  891|                0.25| 3564|
|       0|  null|  549| 0.15404040404040403| 3564|
|    null|  male|  577| 0.16189674523007858| 3564|
|       0|  null|  468| 0.13131313131313133| 3564|
+--------+------+-----+--------------------+-----+
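
If you would rather see 0–100 percentages than fractions, a small optional sketch (my addition, reusing the df and the f import from above) is to scale and round the column with the built-in round function:

# scale the fraction to a percentage and round to 2 decimal places
df = df.withColumn('percent', f.round(f.col('percent') * 100, 2))
df.orderBy('percent', ascending=False).show()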

0 votes

Something like the following should work.

# assumes an active SparkContext, e.g. sc = spark.sparkContext
df = sc.parallelize([(1, 'female', 233), (None, 'female', 314), (0, 'female', 81),
                     (1, None, 342), (1, 'male', 109)]) \
    .toDF(['survived', 'sex', 'count'])

# collect the grand total to the driver, then divide each count by it
total = df.select("count").agg({"count": "sum"}).collect().pop()['sum(count)']
result = df.withColumn('percent', (df['count'] / total) * 100)
result.show()

+--------+------+-----+------------------+
|survived|   sex|count|           percent|
+--------+------+-----+------------------+
|       1|female|  233| 21.59406858202039|
|    null|female|  314|29.101019462465246|
|       0|female|   81| 7.506950880444857|
|       1|  null|  342| 31.69601482854495|
|       1|  male|  109|10.101946246524559|
+--------+------+-----+------------------+
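
Note the design difference from the window-function answer: here the grand total is pulled back to the driver with collect() and reused as a plain Python value in the column expression. For a single scalar that is cheap, while the window version keeps the whole computation in one distributed plan.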

0 votes

You need to:

- calculate the sum,
- create a UDF to find the percentage,
- and add a column for the result, as sketched below.
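
A minimal sketch of that recipe, assuming the survived/sex/count DataFrame from the question; the UDF name to_percent is an illustrative assumption, since the answer itself gives no code:

import pyspark.sql.functions as f
from pyspark.sql.types import DoubleType

# 1. calculate the sum (collected to the driver as a plain number)
total = df.agg(f.sum('count')).collect()[0][0]

# 2. a UDF that finds the percentage (hypothetical name)
@f.udf(returnType=DoubleType())
def to_percent(count):
    return 100.0 * count / total

# 3. add a column for the result
df = df.withColumn('percent', to_percent('count'))
df.show()

Keep in mind that Python UDFs run row by row outside the JVM, so the built-in approaches in the other answers will generally be faster.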


0 votes

Suppose your df has columns a, b, c and d, and you need to find each value's percentage of its column's total. Here is how you can do it; it is faster than a window function :)

import pyspark.sql.functions as fn

# compute all four column totals in a single aggregation pass
divideDF = df.agg(fn.sum('a').alias('a1'),
                  fn.sum('b').alias('b1'),
                  fn.sum('c').alias('c1'),
                  fn.sum('d').alias('d1'))

# bring the single row of totals back to the driver
divideDF = divideDF.take(1)
a1 = divideDF[0]['a1']
b1 = divideDF[0]['b1']
c1 = divideDF[0]['c1']
d1 = divideDF[0]['d1']

# divide each column by its total and scale to a percentage
df = df.withColumn('a_percentage', fn.lit(100) * (fn.col('a') / fn.lit(a1)))
df = df.withColumn('b_percentage', fn.lit(100) * (fn.col('b') / fn.lit(b1)))
df = df.withColumn('c_percentage', fn.lit(100) * (fn.col('c') / fn.lit(c1)))
df = df.withColumn('d_percentage', fn.lit(100) * (fn.col('d') / fn.lit(d1)))

df.show()

Enjoy!
