Spark Scala - calculating a dynamic timestamp interval

Problem description (Votes: 0, Answers: 1)

My DataFrame has a timestamp column (of timestamp type) called "maxTmstmp", and another column holding hours, expressed as an integer and named "WindowHours". I want to dynamically subtract the integer column from the timestamp column to obtain the lower timestamp.

My data and the desired result (the "minTmstmp" column):

+-----------+-------------------+-------------------+
|WindowHours|          maxTmstmp|          minTmstmp|
|           |                   |(maxTmstmp - Hours)|
+-----------+-------------------+-------------------+
|          1|2016-01-01 23:00:00|2016-01-01 22:00:00|
|          2|2016-03-01 12:00:00|2016-03-01 10:00:00|
|          8|2016-03-05 20:00:00|2016-03-05 12:00:00|
|         24|2016-04-12 11:00:00|2016-04-11 11:00:00|
+-----------+-------------------+-------------------+

 root
     |-- WindowHours: integer (nullable = true)
     |-- maxTmstmp: timestamp (nullable = true)

I have already found an expression that works with a fixed hour interval, but it is not dynamic, since the INTERVAL literal is a constant and cannot reference the WindowHours column. The code below does not work as expected:

standards
      .withColumn("minTmstmp", $"maxTmstmp" - expr("INTERVAL 10 HOURS"))
      .show()

Running on Spark 2.4 and Scala.

scala apache-spark dataframe
1 Answer

2 votes

A simple approach is to convert maxTmstmp to Unix time (epoch seconds), subtract the WindowHours value expressed in seconds, and convert the result back to a Spark Timestamp, as shown below:

import java.sql.Timestamp
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1, Timestamp.valueOf("2016-01-01 23:00:00")),
  (2, Timestamp.valueOf("2016-03-01 12:00:00")),
  (8, Timestamp.valueOf("2016-03-05 20:00:00")),
  (24, Timestamp.valueOf("2016-04-12 11:00:00"))
).toDF("WindowHours", "maxTmstmp")

df.withColumn("minTmstmp",
    // epoch seconds minus WindowHours in seconds, cast back to timestamp type
    from_unixtime(unix_timestamp($"maxTmstmp") - ($"WindowHours" * 3600)).cast("timestamp")
  ).show
// +-----------+-------------------+-------------------+
// |WindowHours|          maxTmstmp|          minTmstmp|
// +-----------+-------------------+-------------------+
// |          1|2016-01-01 23:00:00|2016-01-01 22:00:00|
// |          2|2016-03-01 12:00:00|2016-03-01 10:00:00|
// |          8|2016-03-05 20:00:00|2016-03-05 12:00:00|
// |         24|2016-04-12 11:00:00|2016-04-11 11:00:00|
// +-----------+-------------------+-------------------+
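
A variant worth noting (a sketch of my own, not part of the original answer, assuming the same df as above): the same arithmetic can be done without the string round trip that unix_timestamp/from_unixtime introduces, by casting the timestamp to epoch seconds, subtracting, and casting straight back. This avoids formatting through the session time zone and keeps minTmstmp a timestamp column throughout:

// Alternative sketch: cast the timestamp to epoch seconds (long), subtract
// WindowHours converted to seconds, and cast the result directly back.
df.withColumn(
    "minTmstmp",
    ($"maxTmstmp".cast("long") - $"WindowHours" * 3600).cast("timestamp")
  ).show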