I have the values in the dataframe below. I want to fill the id column with the next sequential ids; the ids must be unique and increasing in nature.
+----------------+----+--------------------+
|local_student_id| id| last_updated|
+----------------+----+--------------------+
| 610931|null| null|
| 599768| 3|2020-02-26 15:47:...|
| 633719|null| null|
| 612949| 2|2020-02-26 15:47:...|
| 591819| 1|2020-02-26 15:47:...|
| 595539| 4|2020-02-26 15:47:...|
| 423287|null| null|
| 641322| 5|2020-02-26 15:47:...|
+----------------+----+--------------------+
I want the expected output below. Can anyone help me? I am new to PySpark. I also want to add the current timestamp to the last_updated column for the newly filled rows.
+----------------+----+--------------------+
|local_student_id| id| last_updated|
+----------------+----+--------------------+
| 610931| 6|2020-02-26 16:00:...|
| 599768| 3|2020-02-26 15:47:...|
| 633719| 7|2020-02-26 16:00:...|
| 612949| 2|2020-02-26 15:47:...|
| 591819| 1|2020-02-26 15:47:...|
| 595539| 4|2020-02-26 15:47:...|
| 423287| 8|2020-02-26 16:00:...|
| 641322| 5|2020-02-26 15:47:...|
+----------------+----+--------------------+
I actually tried
final_data = final_data.withColumn(
'id', when(col('id').isNull(), row_number() + max(col('id'))).otherwise(col('id')))
but it gives the following error:
: org.apache.spark.sql.AnalysisException: grouping expressions sequence is empty, and '`local_student_id`' is not an aggregate function. Wrap '(CASE WHEN (`id` IS NULL) THEN (CAST(row_number() AS BIGINT) + max(`id`)) ELSE `id` END AS `id`)' in windowing function(s) or wrap '`local_student_id`' in first() (or first_value) if you don't care which value you get.;;
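The error occurs because max(col('id')) in your expression is the aggregate function pyspark.sql.functions.max, which Spark only accepts inside an aggregation (groupBy/agg) or over a window, and row_number() likewise always needs an over(...) window specification; neither can be used bare inside withColumn. One simple way around this is to compute the current maximum id first, then use a window only to number the rows that are missing an id.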
Here is the code you need:
from pyspark.sql import functions as F, Window

# Current maximum id (the aggregate max ignores null values)
max_id = final_data.groupBy().max("id").collect()[0][0]

final_data = final_data.withColumn(
    "id",
    F.coalesce(
        F.col("id"),
        # Nulls sort first under the default ascending order, so the rows
        # with a null id get row numbers 1, 2, 3, ... and end up as
        # max_id + 1, max_id + 2, ...
        F.row_number().over(Window.orderBy("id")) + F.lit(max_id)
    )
).withColumn(
    "last_updated",
    # Keep the existing timestamp; fill nulls with the current time
    F.coalesce(
        F.col("last_updated"),
        F.current_timestamp()
    )
)
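For reference, here is a self-contained sketch of how you might test this on the sample rows from the question (the SparkSession setup and the literal timestamps are my own assumptions, not from your post). Note that Window.orderBy("id") without a partitionBy pulls all rows into a single partition, which Spark will warn about; that is fine for small data like this, but worth keeping in mind on large tables.

from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data mirroring the question's dataframe
final_data = spark.createDataFrame(
    [
        (610931, None, None),
        (599768, 3, "2020-02-26 15:47:00"),
        (633719, None, None),
        (612949, 2, "2020-02-26 15:47:00"),
        (591819, 1, "2020-02-26 15:47:00"),
        (595539, 4, "2020-02-26 15:47:00"),
        (423287, None, None),
        (641322, 5, "2020-02-26 15:47:00"),
    ],
    "local_student_id long, id long, last_updated string",
).withColumn("last_updated", F.col("last_updated").cast("timestamp"))

max_id = final_data.groupBy().max("id").collect()[0][0]

result = final_data.withColumn(
    "id",
    F.coalesce(F.col("id"), F.row_number().over(Window.orderBy("id")) + F.lit(max_id)),
).withColumn(
    "last_updated",
    F.coalesce(F.col("last_updated"), F.current_timestamp()),
)

result.show(truncate=False)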