Calling a PySpark UDF with multiple columns


The UDF below doesn't work. Am I passing the two columns correctly and calling the function the right way?

Thanks!

def shield(x, y):
    if x == '':
       shield = y
    else:
       shield = x
    return shield

df3.withColumn("shield", shield(df3.custavp1, df3.custavp1))
apache-spark pyspark
1 Answer

I think the arguments are not being passed to the udf correctly.

The correct way is shown below:

>>> ls
[[1, 2, 3, 4], [5, 6, 7, 8]]
>>> from pyspark.sql import Row
>>> R = Row("A1", "A2")
>>> df = sc.parallelize([R(*r) for r in zip(*ls)]).toDF()
>>> df.show()
+---+---+
| A1| A2|
+---+---+
|  1|  5|
|  2|  6|
|  3|  7|
|  4|  8|
+---+---+

>>> def foo(x,y):
...     if x%2 == 0:
...             return x
...     else:
...             return y
... 
>>> 
>>> from pyspark.sql.functions import udf, col
>>> from pyspark.sql.types import IntegerType
>>> 
>>> custom_udf = udf(foo, IntegerType())
>>> df1 = df.withColumn("res", custom_udf(col("A1"), col("A2")))
>>> df1.show()
+---+---+---+
| A1| A2|res|
+---+---+---+
|  1|  5|  5|
|  2|  6|  2|
|  3|  7|  7|
|  4|  8|  4|
+---+---+---+
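
Applied to your shield function, the same fix would look roughly like this (a sketch; it assumes custavp1 is a string column and that you really do mean to pass the same column twice, as in your snippet, otherwise substitute the second column you intended):

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

def shield(x, y):
    # fall back to y when x is an empty string
    return y if x == '' else x

# wrap the plain Python function so Spark can apply it row by row
shield_udf = udf(shield, StringType())

# pass Column objects to the UDF, not the raw Python function
df3 = df3.withColumn("shield", shield_udf(col("custavp1"), col("custavp1")))

For a simple empty-string fallback like this, a built-in expression such as when(col("custavp1") == "", <fallback column>).otherwise(col("custavp1")) would also work and avoids the row-by-row overhead of a Python UDF.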

Let me know if this helps.
