Spark join with different match levels


I have two Spark DataFrames:

df1 = sc.parallelize([
    ['a', '1', 'value1'],
    ['b', '1', 'value2'],
    ['c', '2', 'value3'],
    ['d', '4', 'value4'],
    ['e', '2', 'value5'],
    ['f', '4', 'value6']
]).toDF(('id1', 'id2', 'v1'))

df2 = sc.parallelize([
    ['a','1', 1],
    ['b','1', 1],
    ['y','2', 4],
    ['z','2', 4]
]).toDF(('id1', 'id2', 'v2'))

Each of them has the fields id1 and id2 (and there may be many more id columns). First, I need to match df1 and df2 on id1. Then I need to match all records from both DataFrames that were left unmatched on id2, and so on.

My approach is:

def joinA(df1,df2, field):
    from pyspark.sql.functions import lit

    L = 'L_'
    R = 'R_'
    Lfield = L+field
    Rfield = R+field

    # Taking field's names
    df1n = df1.schema.names
    df2n = df2.schema.names
    newL = [L+fld for fld in df1n]
    newR = [R+fld for fld in df2n]

    # drop duplicates by input field
    df1 = df1.toDF(*newL).dropDuplicates([Lfield])
    df2 = df2.toDF(*newR).dropDuplicates([Rfield])

    # matching records
    df_full = df1.join(df2, df1[Lfield] == df2[Rfield], how='outer').cache()

    # unmatched records from df1
    df_left = df_full.where(df2[Rfield].isNull()).select(newL).toDF(*df1n)
    # unmatched records from df2
    df_right = df_full.where(df1[Lfield].isNull()).select(newR).toDF(*df2n)
    # matched records and adding match level
    df_inner = df_full.where(
        df1[Lfield].isNotNull() & df2[Rfield].isNotNull()
    ).withColumn('matchlevel', lit(field))

    return df_left, df_inner, df_right


first_l,first_i,first_r = joinA(df1,df2,'id1')
second_l,second_i,second_r = joinA(first_l,first_r,'id2')

result = first_i.union(second_i)

Is there a way to make this simpler? Or is there some standard tool for this task?

Thanks,

Max

apache-spark join pyspark bigdata data-analysis
1 Answer

I have another way to do this... but I'm not sure it's better than your solution:

from pyspark.sql import functions as F

id_cols = [cols for cols in df1.columns if cols != 'v1']

df1 = df1.withColumn("get_v2", F.lit(None))
df1 = df1.withColumn("match_level", F.lit(None))

for col in id_cols:
    # restrict df2 to the current id column and the value to fetch;
    # using a named subset avoids ambiguous references to df2 after the join
    df2_sub = df2.select(col, "v2")

    # only rows that are still unmatched (get_v2 is null) can match at this level
    new_df1 = df1.join(
        df2_sub,
        on=(
            (df1[col] == df2_sub[col])
            & df1['get_v2'].isNull()
        ),
        how='left'
    )

    new_df1 = new_df1.withColumn(
        "get_v2",
        F.coalesce(df1.get_v2, df2_sub.v2)
    ).drop(df2_sub[col]).drop(df2_sub.v2)

    # record which id column produced the first match
    new_df1 = new_df1.withColumn(
        "match_level",
        F.when(F.col("get_v2").isNotNull(), F.coalesce(F.col("match_level"), F.lit(col)))
    )

    df1 = new_df1

df1.show()  # note: this output was produced with an extra id3 column added to the sample data
+---+---+---+------+------+-----------+
|id1|id2|id3|    v1|get_v2|match_level|
+---+---+---+------+------+-----------+
|  f|  4|  1|value6|     3|        id3|
|  d|  4|  1|value4|     3|        id3|
|  c|  2|  1|value3|     4|        id2|
|  c|  2|  1|value3|     4|        id2|
|  e|  2|  1|value5|     4|        id2|
|  e|  2|  1|value5|     4|        id2|
|  b|  1|  1|value2|     1|        id1|
|  a|  1|  1|value1|     1|        id1|
+---+---+---+------+------+-----------+

This results in N joins, where N is the number of id columns you have.

Edit: added match_level!
