Create a new row for every minute of difference in Spark SQL

Question · Votes: 0 · Answers: 2

Consider my data:

+---+-------------------+-------------------+
| id|          starttime|            endtime|
+---+-------------------+-------------------+
|  1|1970-01-01 07:00:00|1970-01-01 07:03:00|
+---+-------------------+-------------------+

Based on this, I want a SQL query that creates one row for every minute of difference between endtime and starttime, so that my data ends up exactly like this:

+---+-------------------+-------------------+
| id|          starttime|            endtime|
+---+-------------------+-------------------+
|  1|1970-01-01 07:00:00|1970-01-01 07:03:00|
|  1|1970-01-01 07:01:00|1970-01-01 07:03:00|
|  1|1970-01-01 07:02:00|1970-01-01 07:03:00|
|  1|1970-01-01 07:03:00|1970-01-01 07:03:00|
+---+-------------------+-------------------+

I have a strong preference for SQL, but pyspark is fine if that's not possible.

pyspark apache-spark-sql pyspark-sql
2 Answers
1 vote

Try this:

import pyspark.sql.functions as f
df.show()
+---+-------------------+-------------------+
| id|          starttime|            endtime|
+---+-------------------+-------------------+
|  1|1970-01-01 07:00:00|1970-01-01 07:03:00|
+---+-------------------+-------------------+

#df.printSchema()
# root
# |-- id: long (nullable = true)
# |-- starttime: timestamp (nullable = true)
# |-- endtime: timestamp (nullable = true)

Combining expr with sequence at a one-minute interval gives you an array of per-minute timestamps, which explode then turns into rows.

df.select(
    'id',
    f.explode(f.expr('sequence(starttime, endtime, interval 1 minute)')).alias('starttime'),
    'endtime'
).show(truncate=False)
+---+-------------------+-------------------+
|id |starttime          |endtime            |
+---+-------------------+-------------------+
|1  |1970-01-01 07:00:00|1970-01-01 07:03:00|
|1  |1970-01-01 07:01:00|1970-01-01 07:03:00|
|1  |1970-01-01 07:02:00|1970-01-01 07:03:00|
|1  |1970-01-01 07:03:00|1970-01-01 07:03:00|
+---+-------------------+-------------------+