This question is related to my previous one: pyspark dataframe aggregate a column by sliding time window.
However, I want to create a new post to clarify some key points that were missing from the previous question.
The original dataframe:
client_id value1 name1 a_date
dhd 561 ecdu 2019-10-8
dhd 561 tygp 2019-10-8
dhd 561 rdsr 2019-10-8
dhd 561 rgvd 2019-8-12
dhd 561 bhnd 2019-8-12
dhd 561 prti 2019-8-12
dhd 561 teuq 2019-5-7
dhd 561 wnva 2019-5-7
dhd 561 pqhn 2019-5-7
I need to find the values of "name1" for each "client_id", each "value1", and certain given sliding time windows.
I defined a window function:
w = Window().partitionBy("client_id", "value1").orderBy("a_date")
But I don't know how to select the values of "name1" for the window sizes 1, 2, 6, 9 and 12.
Here, window size means the number of months counted back from the current date "a_date".
For example:
client_id value1 names1_within_window_size_1 names1_within_window_size_2
dhd       561    [ecdu, tygp, rdsr]          [ecdu, tygp, rdsr]

names1_within_window_size_6
[ecdu, tygp, rdsr, rgvd, bhnd, prti, teuq, wnva, pqhn]
names1_within_window_size_1: the month window 2019-10 only
names1_within_window_size_2: the month windows 2019-10 and 2019-09 (there is no data in 2019-09, so only the data from 2019-10 remains)
names1_within_window_size_6: the month windows from 2019-10 back to 2019-05 (again nothing in 2019-09, but there is data in 2019-08 and 2019-05, so those names are included)
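To make the requirement concrete, here is a minimal pure-Python sketch (not Spark) of the intended month-window semantics, run on a few of the sample rows; `month_index` and `names_within_window` are hypothetical helpers introduced only for illustration:

```python
from datetime import date

def month_index(d):
    # Map a date to a linear month index so windows can span year ends.
    return d.year * 12 + (d.month - 1)

def names_within_window(rows, anchor, size):
    # rows: list of (name1, a_date); keep the names whose month falls in
    # the `size` calendar months ending at the month of `anchor`.
    hi = month_index(anchor)
    lo = hi - (size - 1)
    return [n for n, d in rows if lo <= month_index(d) <= hi]

rows = [("ecdu", date(2019, 10, 8)), ("tygp", date(2019, 10, 8)),
        ("rdsr", date(2019, 10, 8)), ("rgvd", date(2019, 8, 12)),
        ("teuq", date(2019, 5, 7))]

print(names_within_window(rows, date(2019, 10, 8), 1))  # October only
print(names_within_window(rows, date(2019, 10, 8), 2))  # Sep-Oct (Sep empty)
print(names_within_window(rows, date(2019, 10, 8), 6))  # May through Oct
```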
Thanks!

I stole the data from your previous question, since I was too lazy to craft my own and someone over there had already carefully designed the input data list.
Since the window slides over a number of records rather than a number of months, I first merge all records of a given month (grouped, of course, by client_id and value1) into a single record; that is what the .groupBy("client_id", "value1", "year_val", "month_val") in the computation of df2 below does.
from pyspark.sql import functions as F
from pyspark.sql.window import Window
data= [['dhd',589,'ecdu','2020-1-5'],
['dhd',575,'tygp','2020-1-5'],
['dhd',821,'rdsr','2020-1-5'],
['dhd',872,'rgvd','2019-12-1'],
['dhd',619,'bhnd','2019-12-15'],
['dhd',781,'prti','2019-12-18'],
['dhd',781,'prti1','2019-12-18'],
['dhd',781,'prti2','2019-11-18'],
['dhd',781,'prti3','2019-10-31'],
['dhd',781,'prti4','2019-09-30'],
['dhd',781,'prt1','2019-07-31'],
['dhd',781,'pr4','2019-06-30'],
['dhd',781,'pr2','2019-08-31'],
['dhd',781,'prt4','2019-01-31'],
['dhd',781,'prti6','2019-02-28'],
['dhd',781,'prti7','2019-02-02'],
['dhd',781,'prti8','2019-03-29'],
['dhd',781,'prti9','2019-04-29'],
['dhd',781,'prti10','2019-05-04'],
['dhd',781,'prti11','2019-03-01']]
columns= ['client_id','value1','name1','a_date']
df= spark.createDataFrame(data,columns)
df2 = df.withColumn("year_val", F.year("a_date"))\
.withColumn("month_val", F.month("a_date"))\
.groupBy("client_id", "value1", "year_val", "month_val")\
.agg(F.concat_ws(", ", F.collect_list("name1")).alias("init_list"))
df2.show()
Here we get init_list as:
+---------+------+--------+---------+-------------+
|client_id|value1|year_val|month_val| init_list|
+---------+------+--------+---------+-------------+
| dhd| 781| 2019| 12| prti, prti1|
| dhd| 589| 2020| 1| ecdu|
| dhd| 781| 2019| 8| pr2|
| dhd| 781| 2019| 3|prti8, prti11|
| dhd| 575| 2020| 1| tygp|
| dhd| 781| 2019| 5| prti10|
| dhd| 781| 2019| 9| prti4|
| dhd| 781| 2019| 11| prti2|
| dhd| 781| 2019| 10| prti3|
| dhd| 821| 2020| 1| rdsr|
| dhd| 781| 2019| 6| pr4|
| dhd| 619| 2019| 12| bhnd|
| dhd| 781| 2019| 7| prt1|
| dhd| 781| 2019| 4| prti9|
| dhd| 781| 2019| 1| prt4|
| dhd| 781| 2019| 2| prti6, prti7|
| dhd| 872| 2019| 12| rgvd|
+---------+------+--------+---------+-------------+
With this in place, we can obtain the final result simply by running a window over those monthly records:
month_range = 6
w = Window().partitionBy("client_id", "value1")\
.orderBy("month_val")\
.rangeBetween(-(month_range+1),0)
df3 = df2.withColumn("last_0_month", F.collect_list(F.col("init_list")).over(w))\
.orderBy("value1", "year_val", "month_val")
df3.show(100,False)
which gives us:
+---------+------+--------+---------+-------------+-------------------------------------------------------------------+
|client_id|value1|year_val|month_val|init_list |last_0_month |
+---------+------+--------+---------+-------------+-------------------------------------------------------------------+
|dhd |575 |2020 |1 |tygp |[tygp] |
|dhd |589 |2020 |1 |ecdu |[ecdu] |
|dhd |619 |2019 |12 |bhnd |[bhnd] |
|dhd |781 |2019 |1 |prt4 |[prt4] |
|dhd |781 |2019 |2 |prti6, prti7 |[prt4, prti6, prti7] |
|dhd |781 |2019 |3 |prti8, prti11|[prt4, prti6, prti7, prti8, prti11] |
|dhd |781 |2019 |4 |prti9 |[prt4, prti6, prti7, prti8, prti11, prti9] |
|dhd |781 |2019 |5 |prti10 |[prt4, prti6, prti7, prti8, prti11, prti9, prti10] |
|dhd |781 |2019 |6 |pr4 |[prt4, prti6, prti7, prti8, prti11, prti9, prti10, pr4] |
|dhd |781 |2019 |7 |prt1 |[prt4, prti6, prti7, prti8, prti11, prti9, prti10, pr4, prt1] |
|dhd |781 |2019 |8 |pr2 |[prt4, prti6, prti7, prti8, prti11, prti9, prti10, pr4, prt1, pr2] |
|dhd |781 |2019 |9 |prti4 |[prti6, prti7, prti8, prti11, prti9, prti10, pr4, prt1, pr2, prti4]|
|dhd |781 |2019 |10 |prti3 |[prti8, prti11, prti9, prti10, pr4, prt1, pr2, prti4, prti3] |
|dhd |781 |2019 |11 |prti2 |[prti9, prti10, pr4, prt1, pr2, prti4, prti3, prti2] |
|dhd |781 |2019 |12 |prti, prti1 |[prti10, pr4, prt1, pr2, prti4, prti3, prti2, prti, prti1] |
|dhd |821 |2020 |1 |rdsr |[rdsr] |
|dhd |872 |2019 |12 |rgvd |[rgvd] |
+---------+------+--------+---------+-------------+-------------------------------------------------------------------+
Limitations:
Unfortunately the second part loses the a_date field, and for a sliding-window operation with a range defined over it, orderBy cannot take multiple columns (note that the orderBy in the window definition is on month_val only). So this exact solution does not work for data spanning multiple years. That is easily overcome, however, by combining the year and month values into a single column, something like month_id, and using that in the orderBy clause.
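For instance (the month_id name and the exact formula are just one possible choice), the combined column can be computed as year * 12 + month; a pure-Python sketch of the arithmetic:

```python
def month_id(year, month):
    # Linearize (year, month) so that consecutive calendar months always
    # differ by exactly 1, even across a year boundary.
    return year * 12 + month

print(month_id(2019, 12), month_id(2020, 1))  # adjacent values
```

In Spark this would be something along the lines of df2.withColumn("month_id", F.col("year_val") * 12 + F.col("month_val")), with month_id then used instead of month_val in both the orderBy and the rangeBetween offsets.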
If you want several windows at once, you can turn month_range into a list and loop over it in the last code snippet to cover all the sizes.
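A pure-Python sketch of that loop, mimicking collect_list("init_list").over(w) with rangeBetween(-(month_range + 1), 0) on the pre-grouped records of a single (client_id, value1) group (the data here is a shortened, made-up excerpt):

```python
# (month_val, init_list) pairs for one group, already aggregated per month.
records = [(1, "prt4"), (2, "prti6, prti7"), (3, "prti8, prti11"), (4, "prti9")]

windows = {}
for month_range in [1, 6]:
    # For each row, collect every init_list whose month lies in the range
    # [month - (month_range + 1), month], as rangeBetween would.
    windows[month_range] = [
        [s for m, s in records if month - (month_range + 1) <= m <= month]
        for month, _ in records
    ]

print(windows[1])
```

In Spark the loop body would instead add one column per size, e.g. .withColumn(f"last_{month_range}_month", F.collect_list("init_list").over(w)), rebuilding w for each month_range.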
Although the last column (last_0_month) looks like an array, it actually contains comma-separated strings produced by the earlier concat_ws operation. You may want to clean that up as well.
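A pure-Python sketch of that cleanup, splitting the comma-separated strings back into individual names:

```python
# One cell of the window column: per-month strings joined by concat_ws.
last_6_month = ["prti10", "pr4, prt1", "prti, prti1"]

# Split each chunk on the separator and flatten into a single name list.
flat = [name for chunk in last_6_month for name in chunk.split(", ")]
print(flat)
```

In Spark, the same effect could be achieved with a small UDF over the array column, or avoided entirely by skipping the concat_ws step and keeping one row per name before the window.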