Transposing a DataFrame with many columns


Here is my DataFrame schema:

 root
 |-- customerid: string (nullable = true)
 |-- event: string (nullable = true)
 |-- groupe1: string (nullable = false)
 |-- groupe2: string (nullable = false)
 |-- groupe3: string (nullable = false)

Here is a sample of my DataFrame:

+-----------+-----+------------------+--------------+--------------+
| customerid|event|           groupe1|       groupe2|       groupe3|
+-----------+-----+------------------+--------------+--------------+
|    4454545|     |       [aaa,0,0,0]|[555,0,88,0,0]|[3190,0,0,0,0]|
| 8878787787| 2019|   [bbb,0,fff,0,0]| [420,0,0,0,0]|[9580,0,0,0,0]|
|12555888888| 2019|[cccc,0,fff,eee,0]| [385,0,0,0,0]|[4995,0,0,0,0]|
+-----------+-----+------------------+--------------+--------------+

I tried this code:

val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)

df.printSchema

val df4 = df.withColumn("vars", explode(zip($"groupe1", $"groupe2", $"groupe3"))).select(
   $"customerid", $"event",
   $"vars._1".alias("groupe1"), $"vars._2".alias("groupe2"), $"vars._3".alias("groupe3"))

I got this error:

cannot resolve 'UDF(groupe1, groupe2, groupe3)' due to data type mismatch: argument 1 requires array<string> type, however, '`groupe1`' is of string type. argument 2 requires array<string> type, however, '`groupe2`' is of string type. argument 3 requires array<string> type, however, '`groupe3`' is of string type.;;
scala apache-spark bigdata
1 Answer

The columns groupe1, groupe2 and groupe3 are of type string, so they are incompatible with a udf whose parameters are Seq[String]. Perhaps you should change the udf's input to the string type.
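
The converse also works: keep the udf as written and convert the string columns into arrays first. Below is a minimal sketch of that approach; it assumes the groupeN values are comma-separated strings such as "[aaa,0,0,0]" (the bracket stripping and the split pattern are assumptions to adapt to the real format) and that a SparkSession named spark is in scope:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import spark.implicits._  // assumes a SparkSession instance named spark

// same udf as in the question: zips three sequences element-wise
val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)

// assumption: each groupeN value looks like "[aaa,0,0,0]";
// strip the surrounding brackets and split on commas to obtain an array<string>
def toArray(c: Column): Column = split(regexp_replace(c, "[\\[\\]]", ""), ",")

val df4 = df
  .withColumn("groupe1", toArray($"groupe1"))
  .withColumn("groupe2", toArray($"groupe2"))
  .withColumn("groupe3", toArray($"groupe3"))
  .withColumn("vars", explode(zip($"groupe1", $"groupe2", $"groupe3")))
  .select(
    $"customerid", $"event",
    $"vars._1".alias("groupe1"), $"vars._2".alias("groupe2"), $"vars._3".alias("groupe3"))

On Spark 2.4+ the udf can also be replaced by the built-in arrays_zip function, which avoids pushing every row through a Scala closure.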
