How can I replace the pandas Timedelta function with PySpark?

Problem description

I am writing a small script in PySpark that generates a sequence of dates (the 36 months before today's date). Overall I managed to complete the task,

but only with the help of the pandas Timedelta function to compute the time offset.

Is it possible to replace pandas' Timedelta with a PySpark function?

import pandas as pd
from datetime import date, datetime

today = datetime.today()
# I want to replace this pandas Timedelta call with a PySpark equivalent
data = [(date(today.year, today.month, 1) - pd.Timedelta(36, 'M'),
         date(today.year, today.month, 1))]
df = spark.createDataFrame(data, ["minDate", "maxDate"])

+----------+----------+
|   minDate|   maxDate|
+----------+----------+
|2016-10-01|2019-10-01|
+----------+----------+

import pyspark.sql.functions as f

df.withColumn("monthsDiff", f.months_between("maxDate", "minDate"))\
    .withColumn("repeat", f.expr("split(repeat(',', monthsDiff), ',')"))\
    .select("*", f.posexplode("repeat").alias("date", "val"))\
    .withColumn("date", f.expr("add_months(minDate, date)"))\
    .select("date")\
    .show(n=50)

+----------+
|      date|
+----------+
|2016-10-01|
|2016-11-01|
|2016-12-01|
|2017-01-01|
|2017-02-01|
|2017-03-01|
 etc...
+----------+
Tags: python, pyspark, pyspark-sql, timedelta
1 Answer

You can easily accomplish these tasks with the datetime and dateutil modules:
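A minimal sketch of that approach, using `relativedelta` from the `dateutil` package (assumed to be installed) in place of `pd.Timedelta(36, 'M')`; unlike a fixed-length timedelta, it steps back exactly 36 calendar months:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # assumes python-dateutil is installed

today = date.today()
max_date = date(today.year, today.month, 1)     # first day of the current month
min_date = max_date - relativedelta(months=36)  # exactly 36 calendar months earlier

# the same (minDate, maxDate) tuple the question builds with pd.Timedelta
data = [(min_date, max_date)]
```

The resulting `data` list can be passed to `spark.createDataFrame` exactly as in the question, with no pandas dependency.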
