Creating a new column with a MapType literal

Problem description

I have the following pyspark.DataFrame:

+---+--------+--------+--------------+
|SEX|_AGEG5YR|_IMPRACE|       _LLCPWT|
+---+--------+--------+--------------+
|  2|    11.0|     1.0| 79.4259469451|
|  2|    10.0|     1.0| 82.1648291655|
|  2|    11.0|     2.0| 55.7851100058|
|  2|    13.0|     1.0|115.9818718258|
|  2|    12.0|     1.0|194.7566575195|
+---+--------+--------+--------------+

I want to create a new column based on the SEX column. As suggested by this previous answer, I defined a MapType literal as follows:

from itertools import chain

from pyspark.sql.functions import create_map, lit

brfss_mapping = {
    "SEX": {
        1: "Male",
        2: "Female",
        9: "Refused"
    }
}
brfss_sex_mapping = create_map(
    [lit(x) for x in chain(*brfss_mapping["SEX"].items())]
)
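For clarity, the chain(*...items()) idiom simply flattens the nested dict into the alternating key, value sequence that create_map expects. This can be checked in plain Python, no Spark required:

```python
from itertools import chain

brfss_mapping = {"SEX": {1: "Male", 2: "Female", 9: "Refused"}}

# chain(*dict.items()) unpacks each (key, value) pair and
# concatenates them into one flat alternating sequence.
flat = list(chain(*brfss_mapping["SEX"].items()))
print(flat)  # [1, 'Male', 2, 'Female', 9, 'Refused']
```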

Now, when I call withColumn with brfss_sex_mapping.getItem(...) and a constant key, as follows:

brfss_dmy = brfss_dmy.withColumn(
    "SEX_2",
    brfss_sex_mapping.getItem(1)
)

I get the expected result (the key 1 is constant, so every row is mapped to "Male"):

+---+--------+--------+--------------+-----+                                    
|SEX|_AGEG5YR|_IMPRACE|       _LLCPWT|SEX_2|
+---+--------+--------+--------------+-----+
|  1|    13.0|     1.0|381.8001043164| Male|
|  2|    10.0|     1.0| 82.1648291655| Male|
|  1|    11.0|     1.0|279.1864457296| Male|
|  1|    10.0|     1.0| 439.024136158| Male|
|  2|     8.0|     1.0| 372.921644978| Male|
+---+--------+--------+--------------+-----+

However, when I try to pass the actual column, as follows (again, as suggested in the previous answer):

brfss_dmy = brfss_dmy.withColumn(
    "SEX_2",
    brfss_sex_mapping.getItem(col("SEX"))
)

I get the following error:

java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Column SEX


python apache-spark pyspark apache-spark-sql pyspark-sql
1 Answer

It seems that in Spark 3.0 a Column can no longer be passed to getItem. The Spark 3.0 migration guide notes that Column.getItem was changed so that it no longer calls Column.apply, so passing a Column as the key now raises an error; indexing the map column with brackets (map_col[key_col]) still works.
