Create a DataFrame using a schema provided as a JSON file

Problem description · Votes: 1 · Answers: 2

How can I create a PySpark dataframe using these 2 JSON files?

  • File 1: this file contains the complete data.
  • File 2: this file contains only the schema for the data in file 1.

file1

{"RESIDENCY":"AUS","EFFDT":"01-01-1900","EFF_STATUS":"A","DESCR":"Australian Resident","DESCRSHORT":"Australian"}

file2

[{"fields":[{"metadata":{},"name":"RESIDENCY","nullable":true,"type":"string"},{"metadata":{},"name":"EFFDT","nullable":true,"type":"string"},{"metadata":{},"name":"EFF_STATUS","nullable":true,"type":"string"},{"metadata":{},"name":"DESCR","nullable":true,"type":"string"},{"metadata":{},"name":"DESCRSHORT","nullable":true,"type":"string"}],"type":"struct"}]
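As a quick sanity check, plain Python's json module can confirm that the field names in file2 line up with the keys in file1 (both contents copied verbatim from the snippets above):

```python
import json

# Contents of file1 and file2, copied from the question.
data = json.loads(
    '{"RESIDENCY":"AUS","EFFDT":"01-01-1900","EFF_STATUS":"A",'
    '"DESCR":"Australian Resident","DESCRSHORT":"Australian"}'
)
schema = json.loads(
    '[{"fields":['
    '{"metadata":{},"name":"RESIDENCY","nullable":true,"type":"string"},'
    '{"metadata":{},"name":"EFFDT","nullable":true,"type":"string"},'
    '{"metadata":{},"name":"EFF_STATUS","nullable":true,"type":"string"},'
    '{"metadata":{},"name":"DESCR","nullable":true,"type":"string"},'
    '{"metadata":{},"name":"DESCRSHORT","nullable":true,"type":"string"}'
    '],"type":"struct"}]'
)

# Every schema field name should appear as a key in the data record.
field_names = {f["name"] for f in schema[0]["fields"]}
print(field_names == set(data))  # True
```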
pyspark apache-spark-sql pyspark-sql pyspark-dataframes
2 Answers

2 votes

First, you have to read the schema file with Python's json.load, then convert it to a StructType using StructType.fromJson:

import json
from pyspark.sql.types import StructType

# file2 holds a one-element list, hence the [0] when building the StructType.
with open("/path/to/file2.json") as f:
    json_schema = json.load(f)

schema = StructType.fromJson(json_schema[0])

Now just pass that schema to the DataFrame reader:

df = spark.read.schema(schema).json("/path/to/file1.json")

df.show()

#+---------+----------+----------+-------------------+----------+
#|RESIDENCY|     EFFDT|EFF_STATUS|              DESCR|DESCRSHORT|
#+---------+----------+----------+-------------------+----------+
#|      AUS|01-01-1900|         A|Australian Resident|Australian|
#+---------+----------+----------+-------------------+----------+

EDIT:

If the file containing the schema is located in GCS, you can use the Spark or Hadoop API to get the file content instead of open(); the second answer shows a Spark-based read.
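One detail worth noting: StructType.fromJson expects the struct object itself, but file2 wraps it in a one-element JSON array, hence the [0] index. A minimal plain-Python illustration of that nesting (schema abbreviated to two fields):

```python
import json

# file2's content: a JSON array containing a single struct schema object.
schema_text = ('[{"fields":['
               '{"metadata":{},"name":"RESIDENCY","nullable":true,"type":"string"},'
               '{"metadata":{},"name":"DESCR","nullable":true,"type":"string"}'
               '],"type":"struct"}]')

json_schema = json.loads(schema_text)
print(type(json_schema).__name__)                     # list
print(json_schema[0]["type"])                         # struct
print([f["name"] for f in json_schema[0]["fields"]])  # ['RESIDENCY', 'DESCR']
```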

0 votes

I found that the GCSFS package can access files in a GCP bucket; alternatively, the schema file can be read with Spark itself and then parsed:

import json

# Read the schema file as plain text and join its lines back into one string.
file_content = spark.read.text("/path/to/file2.json").rdd.map(
    lambda r: " ".join([str(elt) for elt in r])
).reduce(
    lambda x, y: "\n".join([x, y])
)

json_schema = json.loads(file_content)
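The map/reduce pair above only reassembles the file's text lines into a single string so that json.loads can parse it. The same joining logic, sketched in plain Python with a simulated multi-line file2 (abbreviated to two fields):

```python
import json

# Simulated output of spark.read.text(...): one entry per line of file2.
lines = [
    '[{"fields":[',
    '{"metadata":{},"name":"RESIDENCY","nullable":true,"type":"string"},',
    '{"metadata":{},"name":"EFFDT","nullable":true,"type":"string"}',
    '],"type":"struct"}]',
]

# Equivalent of the reduce(lambda x, y: "\n".join([x, y])) step above.
file_content = "\n".join(lines)

json_schema = json.loads(file_content)
print(json_schema[0]["type"])  # struct
```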