Applying StopWordsRemover and RegexTokenizer to multiple columns in Spark 2.4.3

Problem description

I have the following dataframe, df4:

|Itemno  |fits_assembly_id                         |fits_assembly_name                                                            |assembly_name                       |
|0450056 |13039 135502 141114 4147 138865 2021 9164|OIL PUMP ASSEMBLY A01EA09CA 4999202399920239A06 A02EA09CA A02EA09CB A02EA09CC|OIL PUMP ASSEMBLY 999202399920239A06|
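
For reproduction, a minimal sketch of building this dataframe from the sample row above (assuming an existing SparkSession named spark):

# Hypothetical reconstruction of df4 from the sample row
df4 = spark.createDataFrame([
    ('0450056',
     '13039 135502 141114 4147 138865 2021 9164',
     'OIL PUMP ASSEMBLY A01EA09CA 4999202399920239A06 A02EA09CA A02EA09CB A02EA09CC',
     'OIL PUMP ASSEMBLY 999202399920239A06'),
], ['Itemno', 'fits_assembly_id', 'fits_assembly_name', 'assembly_name'])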

I am using the following code to process/clean the dataframe above:

from pyspark.ml.feature import StopWordsRemover, RegexTokenizer
from pyspark.sql.functions import expr


# Task-1: Regex Tokenizer

tk = RegexTokenizer(pattern=r'(?:\p{Punct}|\s)+', inputCol='fits_assembly_name', outputCol='temp1')
df5 = tk.transform(df4)

# Task-2: StopWordsRemover
sw = StopWordsRemover(inputCol='temp1', outputCol='temp2')
df6 = sw.transform(df5)

# Task-3: Remove duplicate tokens and rejoin them into a string
df7 = df6.withColumn('fits_assembly_name', expr('concat_ws(" ", array_distinct(temp2))')) \
    .drop('temp1', 'temp2')
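
Two defaults of these stages are worth noting: RegexTokenizer lower-cases its tokens (toLowercase defaults to True), and StopWordsRemover applies its built-in English stop-word list. If the original casing matters or extra domain words should be dropped, both can be overridden; a minimal sketch (the added stop word 'assembly' is purely illustrative):

# Keep the original casing and extend the default English stop-word list
tk = RegexTokenizer(pattern=r'(?:\p{Punct}|\s)+', inputCol='fits_assembly_name',
                    outputCol='temp1', toLowercase=False)
sw = StopWordsRemover(inputCol='temp1', outputCol='temp2',
                      stopWords=StopWordsRemover.loadDefaultStopWords('english') + ['assembly'])  # 'assembly' is illustrative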

I want to process both the fits_assembly_name and assembly_name columns with RegexTokenizer & StopWordsRemover in one go. Could you share how to achieve this?

apache-spark pyspark apache-spark-sql
1 Answer

You can process multiple columns with a simple for loop, and use a pyspark.ml.Pipeline to skip the intermediate dataframes; see below:

from pyspark.ml.feature import StopWordsRemover, RegexTokenizer
from pyspark.ml import Pipeline
from pyspark.sql.functions import expr

# df4 is the initial dataframe and new result will overwrite it.
for col in ['fits_assembly_name', 'assembly_name']:
    tk = RegexTokenizer(pattern=r'(?:\p{Punct}|\s)+', inputCol=col, outputCol='temp1')
    sw = StopWordsRemover(inputCol='temp1', outputCol='temp2')
    pipeline = Pipeline(stages=[tk, sw])
    df4 = pipeline.fit(df4).transform(df4) \
        .withColumn(col, expr('concat_ws(" ", array_distinct(temp2))')) \
        .drop('temp1', 'temp2')
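
If you would rather fit a single Pipeline than one per column, you can build all the stages with a list comprehension, giving each column its own temporary output columns. A sketch under that assumption (the _tok/_sw suffixes are just illustrative names):

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover
from pyspark.sql.functions import expr

cols = ['fits_assembly_name', 'assembly_name']

# One tokenizer + stop-word remover per column; per-column temp names avoid clashes
stages = [stage for c in cols for stage in (
    RegexTokenizer(pattern=r'(?:\p{Punct}|\s)+', inputCol=c, outputCol=c + '_tok'),
    StopWordsRemover(inputCol=c + '_tok', outputCol=c + '_sw'),
)]

df4 = Pipeline(stages=stages).fit(df4).transform(df4)
for c in cols:
    df4 = df4.withColumn(c, expr('concat_ws(" ", array_distinct({}_sw))'.format(c))) \
        .drop(c + '_tok', c + '_sw')

Since both stages are plain transformers (nothing is learned during fit), the result matches the loop above; the single Pipeline mainly saves repeated fit calls and keeps all the stages together as one reusable object.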