How to flatten DataFrame columns


Given a DataFrame in this format:

{
    "field1": "value1",
    "field2": "value2",
    "elements": [{
        "id": "1",
        "name": "a"
    },
    {
        "id": "2",
        "name": "b"
    },
    {
        "id": "3",
        "name": "c"
    }]
}
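For a self-contained way to reproduce such a DataFrame, the document can be built as an in-memory JSON string (a sketch; the `SparkSession` setup and the inlined JSON are assumptions, not part of the original question):

```scala
import org.apache.spark.sql.SparkSession

// Local session for experimentation only (assumed setup)
val spark = SparkSession.builder().master("local[*]").appName("flatten").getOrCreate()
import spark.implicits._

// The example document from above, as a single JSON string
val json = Seq(
  """{"field1":"value1","field2":"value2","elements":[{"id":"1","name":"a"},{"id":"2","name":"b"},{"id":"3","name":"c"}]}"""
)

// spark.read.json can parse a Dataset[String] directly
val df = spark.read.json(json.toDS)
df.printSchema()
```

Spark infers `elements` as an array of structs with `id` and `name` fields, which is what makes the `explode` step below possible.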

we can flatten the columns like this:

val exploded = df.withColumn("elements", explode($"elements"))
exploded.show()
 >> +--------+------+------+
 >> |elements|field1|field2|
 >> +--------+------+------+
 >> |   [1,a]|value1|value2|
 >> |   [2,b]|value1|value2|
 >> |   [3,c]|value1|value2|
 >> +--------+------+------+
val flattened = exploded.select("elements.*", "field1", "field2")
flattened.show()
 >> +---+----+------+------+
 >> | id|name|field1|field2|
 >> +---+----+------+------+
 >> |  1|   a|value1|value2|
 >> |  2|   b|value1|value2|
 >> |  3|   c|value1|value2|
 >> +---+----+------+------+
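One caveat not raised in the question: `explode` silently drops rows whose array is null or empty. If those rows should survive the flattening, Spark (2.2+) provides `explode_outer`; a minimal sketch, assuming the same `df` as above:

```scala
import org.apache.spark.sql.functions.explode_outer

// explode_outer keeps rows with a null or empty elements array,
// producing a null struct that flattens to null id/name columns
val explodedAll = df.withColumn("elements", explode_outer($"elements"))
```

For the example data this behaves identically to `explode`, since every row has a non-empty array.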

Is there a way to get the flattened DataFrame without explicitly listing the remaining columns? Something like this (although this does not work)?

val flattened = exploded.select("elements.*", "*")
Tags: scala, apache-spark, apache-spark-sql
1 Answer

Yes, you can query the columns of exploded and then select all of them except elements:

import org.apache.spark.sql.functions.col

val colsToSelect = exploded.columns.filterNot(c => c == "elements").map(col)

val flattened = exploded.select(($"elements.*" +: colsToSelect): _*)
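The column-list manipulation here is plain Scala, so the select list can be checked without Spark at all (a minimal sketch of the same pattern on ordinary collections):

```scala
// Mimic exploded.columns, which returns Array[String]
val columns = Array("elements", "field1", "field2")

// Drop the struct column being flattened, keep everything else
val rest = columns.filterNot(_ == "elements")

// Prepend the struct expansion; in the Spark call this sequence
// is passed as varargs via : _*
val selectList = ("elements.*" +: rest).toList
// List(elements.*, field1, field2)
```

The `map(col)` in the answer turns the names into `Column`s before the varargs expansion; this is needed because `$"elements.*"` is already a `Column`, and the whole sequence passed with `: _*` must have a single element type.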