Why does PySpark fail when running a pandas_udf?


I'm getting the following error when running a pandas UDF in PySpark. This is the UDF, which uses the external library textdistance:

def algoritmos_comparacion(num_serie_rec, num_serie_exp):
    d = textdistance.hamming(num_serie_rec, num_serie_exp)
    return str(d)
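
For reference, outside Spark the function works on plain strings; a quick check (assuming only that textdistance is installed, with made-up serial numbers) looks like:

import textdistance

# Hamming distance between two plain Python strings
print(textdistance.hamming("ABC123", "ABD123"))  # -> 1: the strings differ at one position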

Then I register the function:

algoritmos_comparacion_udf = f.pandas_udf(algoritmos_comparacion, StringType())

Finally, I use the UDF:

df.withColumn("hamming", algoritmos_comparacion_udf(f.col("num_serie_exp"), f.col("num_serie_rec")))

I have pandas installed and pyarrow version 0.8.0. This is the error I get:

TypeError: 'Series' objects are mutable, thus they cannot be hashed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 235, in main
    process()
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 230, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/serializers.py", line 267, in dump_stream
    for series in iterator:
  File "<string>", line 1, in <lambda>
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 92, in <lambda>
    return lambda *a: (verify_result_length(*a), arrow_return_type)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 83, in verify_result_length
    result = f(*a)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/util.py", line 55, in wrapper
    return f(*args, **kwargs)
  File "/home/bguser/SII-IVA/jobs/caso3/caso3.py", line 39, in algoritmos_comparacion
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/edit_based.py", line 49, in __call__
    result = self.quick_answer(*sequences)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/base.py", line 91, in quick_answer
    if self._ident(*sequences):
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/base.py", line 110, in _ident
    if e1 != e2:
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/pandas/core/generic.py", line 1556, in __nonzero__
    self.__class__.__name__
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

How can I fix this error?

To reproduce it, you can run a pandas_udf with any of the algorithms from the textdistance library. For example:

import textdistance
import pyspark.sql.functions as f
from pyspark.sql.types import MapType, StringType

def algoritmos_comparacion(num_serie_rec, num_serie_exp):
    # Compare the two serial numbers with several textdistance algorithms
    # and collect each result as a string.
    data = {}
    algoritmos = {
        "hamming": textdistance.hamming,
        "levenshtein": textdistance.levenshtein,
        "damerau_levenshtein": textdistance.damerau_levenshtein,
        "jaro": textdistance.jaro,
        "mlipns": textdistance.mlipns,
        "strcmp95": textdistance.strcmp95,
        "needleman_wunsch": textdistance.needleman_wunsch,
        "gotoh": textdistance.gotoh,
        "smith_waterman": textdistance.smith_waterman
    }
    for name, alg in algoritmos.items():
        try:
            data[name] = str(alg(num_serie_rec, num_serie_exp))
        except Exception:
            data[name] = "ERROR"
    return data

algoritmos_comparacion_udf = f.pandas_udf(algoritmos_comparacion, MapType(StringType(), StringType()))

dataframe.withColumn("algorithms", algoritmos_comparacion_udf(f.col("a"), f.col("b")))
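
A minimal DataFrame to run this against could look like the following; the SparkSession settings and the sample serial numbers are only illustrative:

from pyspark.sql import SparkSession

# Illustrative local session and sample data; the column names "a" and "b"
# match the withColumn call above.
spark = SparkSession.builder.master("local[1]").appName("textdistance_repro").getOrCreate()
dataframe = spark.createDataFrame([("ABC123", "ABD123"), ("XYZ999", "XYZ990")], ["a", "b"])

# Forcing evaluation triggers the error described above.
dataframe.withColumn("algorithms", algoritmos_comparacion_udf(f.col("a"), f.col("b"))).show(truncate=False)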

Thank you.

pandas apache-spark pyspark pyarrow
1 Answer

Solved:

A pandas_udf receives each input column as a whole pandas Series, so the scalar function has to be applied element-wise across the two Series instead of being called on the Series themselves:

algoritmos_comparacion_udf = f.pandas_udf(
    lambda s1, s2: s1.combine(s2, algoritmos_comparacion),
    MapType(StringType(), StringType())
)

dataframe.withColumn("algorithms", algoritmos_comparacion_udf(f.col("a"), f.col("b")))
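
For context, the ValueError from the question can be reproduced outside Spark: the original pandas_udf handed whole pandas Series to algoritmos_comparacion, and textdistance then compared the two Series with !=, which yields a boolean Series whose truth value is ambiguous. A minimal sketch of that behaviour (assuming only that pandas and textdistance are installed; the sample values are made up):

import pandas as pd
import textdistance

s1 = pd.Series(["ABC123", "XYZ999"])
s2 = pd.Series(["ABD123", "XYZ990"])

# textdistance expects scalar strings; passing whole Series makes it compare
# them with `!=`, which raises the "truth value of a Series is ambiguous"
# error shown in the traceback above.
try:
    textdistance.hamming(s1, s2)
except (TypeError, ValueError) as exc:
    print(type(exc).__name__, ":", exc)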