What should the input type of a UDF be for a column whose type is an array of StructType?


My DataFrame has the following schema:

root
 |-- col1: string (nullable = true)
 |-- col2: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- unit1: string (nullable = true)
 |    |    |-- sum(unit2): string (nullable = true)
 |    |    |-- max(unit3): string (nullable = true)
 |-- col3: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- unit1: string (nullable = true)
 |    |    |-- sum(unit2): string (nullable = true)
 |    |    |-- max(unit3): string (nullable = true)

I am writing a UDF in Scala that takes the columns col2 and col3. Given the values in col2, what should the input type of each column I pass to the UDF be?

val process_stuff = udf((col2: ???, col3: ??? ) => {

So far I have tried this, among other things:

val process_stuff = udf((col2:ArrayType[StructType[StructField]], col3:ArrayType[StructType[StructField]]) => {

but it complains everywhere. Please help!

apache-spark apache-spark-sql user-defined-functions
1 Answer

Your UDF should have the following signature:

val process_stuff = udf((col2: Seq[Row], col3: Seq[Row]) => {...})
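At runtime Spark passes each array-of-struct column to a Scala UDF as a Seq[Row], and the fields of each Row can be read by name with getAs. Below is a minimal sketch of how the body might look, assuming (this is an assumption, not from the original post) that you only want to collect the unit1 field from both columns and that your DataFrame is named df:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{col, udf}

// Hypothetical example: read the unit1 field from every struct in col2 and col3,
// guarding against null arrays and null elements, and join the results.
val process_stuff = udf((col2: Seq[Row], col3: Seq[Row]) => {
  val fromCol2 = Option(col2).getOrElse(Seq.empty).filter(_ != null)
    .map(_.getAs[String]("unit1"))
  val fromCol3 = Option(col3).getOrElse(Seq.empty).filter(_ != null)
    .map(_.getAs[String]("unit1"))
  (fromCol2 ++ fromCol3).mkString(",")
})

// Applying the UDF to the DataFrame:
val result = df.withColumn("processed", process_stuff(col("col2"), col("col3")))

Fields with names like sum(unit2) can be accessed the same way, e.g. r.getAs[String]("sum(unit2)").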