Is there a generic way to read multiline JSON in Spark, and more specifically in PySpark?


I have a multiline JSON file like this:

{"_id": {"$oid": "50b59cd75bed76f46522c34e"}, "student_id": 0, "class_id": 2, "scores": [{"type": "exam", "score": 57.92947112575566}, {"type": "quiz", "score": 21.24542588206755}, {"type": "homework", "score": 68.19567810587429}, {"type": "homework", "score": 67.95019716560351}, {"type": "homework", "score": 18.81037253352722}]}

This is just one record of the JSON; the file contains more documents like it. I am looking for a way to read this file in PySpark/Spark. Can it be done independently of the exact JSON layout?

I need the output to have each entry of "scores" as its own column, e.g. one score column holding 57.92947112575566 and another score column holding 21.24542588206755.

Any help is appreciated.

python json apache-spark pyspark
1 Answer

2 votes

Yes.

Use the multiline option set to true:

from pyspark.sql.functions import explode, col

df = spark.read.option("multiline", "true").json("multi.json")

You get the output below:

+--------------------------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------+----------+
|_id                       |class_id|scores                                                                                                                                            |student_id|
+--------------------------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------+----------+
|[50b59cd75bed76f46522c34e]|2       |[[57.92947112575566, exam], [21.24542588206755, quiz], [68.1956781058743, homework], [67.95019716560351, homework], [18.81037253352722, homework]]|0         |
+--------------------------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------+----------+
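To see why the multiline option matters: Spark's default JSON source expects one complete object per line (JSON Lines), while multiline=true parses each file as a single, possibly pretty-printed, JSON value. Below is a plain-Python (non-Spark) sketch of that distinction, using a small made-up record:

```python
import json
import os
import tempfile

# A pretty-printed record that spans several lines (made up for illustration).
pretty = '{\n  "student_id": 0,\n  "class_id": 2\n}\n'

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(pretty)
    path = f.name

# Parsing line by line fails on a record that spans several lines...
try:
    with open(path) as fh:
        json.loads(next(fh))  # first line is just "{"
    per_line_worked = True
except json.JSONDecodeError:
    per_line_worked = False

# ...while parsing the whole file (what multiline=true does) succeeds.
with open(path) as fh:
    record = json.load(fh)

os.remove(path)
print(per_line_worked)     # False
print(record["class_id"])  # 2
```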

Add these lines to flatten it:

df2 = df.withColumn("scores", explode(col("scores"))) \
    .select(col("_id.*"), col("class_id"), col("scores.*"), col("student_id"))

+------------------------+--------+-----------------+--------+----------+
|$oid                    |class_id|score            |type    |student_id|
+------------------------+--------+-----------------+--------+----------+
|50b59cd75bed76f46522c34e|2       |57.92947112575566|exam    |0         |
|50b59cd75bed76f46522c34e|2       |21.24542588206755|quiz    |0         |
|50b59cd75bed76f46522c34e|2       |68.1956781058743 |homework|0         |
|50b59cd75bed76f46522c34e|2       |67.95019716560351|homework|0         |
|50b59cd75bed76f46522c34e|2       |18.81037253352722|homework|0         |
+------------------------+--------+-----------------+--------+----------+
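The exploded table above has one row per score. To get the layout the question asks for (each score in its own column), one option is to key the columns by array position; in Spark this could be done with posexplode plus a pivot. The plain-Python sketch below just shows the target shape on the sample record, with values copied from the question:

```python
import json

# The sample document from the question.
record = json.loads(
    '{"_id": {"$oid": "50b59cd75bed76f46522c34e"}, "student_id": 0,'
    ' "class_id": 2, "scores": ['
    '{"type": "exam", "score": 57.92947112575566},'
    '{"type": "quiz", "score": 21.24542588206755},'
    '{"type": "homework", "score": 68.19567810587429},'
    '{"type": "homework", "score": 67.95019716560351},'
    '{"type": "homework", "score": 18.81037253352722}]}'
)

# One column per array position: score_1, score_2, ...
wide = {"student_id": record["student_id"], "class_id": record["class_id"]}
for i, entry in enumerate(record["scores"], start=1):
    wide[f"score_{i}"] = entry["score"]

print(wide["score_1"])  # 57.92947112575566
print(len(wide))        # 7 (student_id, class_id, score_1..score_5)
```

Pivoting on "type" instead would collide here, since "homework" appears three times; position-based columns avoid that.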

Note that we are using the "col" and "explode" functions from Spark, so you need the following import for them to work:

from pyspark.sql.functions import explode, col

You can read more about parsing multiline JSON files on the page below.

https://docs.databricks.com/spark/latest/data-sources/read-json.html

Thanks.
