DF insertInto does not preserve all columns for mixed structured data (json, string)

Question · Votes: 1 · Answers: 2

DataFrame saveAsTable correctly saves all column values, but the insertInto function does not store all columns: in particular, the JSON data is truncated and the columns after it are not stored in the Hive table.

Our environment:

  • Spark 2.2.0
  • EMR 5.10.0
  • Scala 2.11.8

Sample data:

 a8f11f90-20c9-11e8-b93e-2fc569d27605   efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010    NonDemarcated       {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13","nodeid":"N02c00056","parkingzoneid":"E657F298-2D96-4C7D-8516-E228153FE010","site-id":"a8f11f90-20c9-11e8-b93e-2fc569d27605","channel":1,"type":"Park","active":true,"tag":"","configured_date":"2017-10-23 23:29:11.20","vs":[5.0,1.7999999523162842,1.5]}

DF SaveAsTable

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions.{col, lit, unix_timestamp}

val spark = SparkSession.builder().appName("Spark SQL Test").
  config("hive.exec.dynamic.partition", "true").
  config("hive.exec.dynamic.partition.mode", "nonstrict").
  enableHiveSupport().getOrCreate()

val zoneStatus = spark.table("zone_status")

zoneStatus.select(col("site-id"), col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts")).
  write.mode(SaveMode.Overwrite).saveAsTable("dwh_zone_status")

The resulting table stores the data correctly:

a8f11f90-20c9-11e8-b93e-2fc569d27605    efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010    NonDemarcated   0   {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13","nodeid":"N02c00056","parkingzoneid":"E657F298-2D96-4C7D-8516-E228153FE010","site-id":"a8f11f90-20c9-11e8-b93e-2fc569d27605","channel":1,"type":"Park","active":true,"tag":"","configured_date":"2017-10-23 23:29:11.20","vs":[5.0,1.7999999523162842,1.5]} 1520453589

DF insertInto

zoneStatus.
  select(col("site-id"), col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts")).
  write.mode(SaveMode.Overwrite).insertInto("zone_status_insert")

But insertInto does not persist everything: the JSON string is stored only partially, and the columns after it are not stored at all.

a8f11f90-20c9-11e8-b93e-2fc569d27605    efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010    NonDemarcated   0   {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13"  NULL

We use the insertInto function throughout our project, and recently ran into other errors while parsing the JSON data; we noticed that the config content is not fully stored. We plan to change to saveAsTable, but we could avoid the code change if there is any workaround that can be applied through Spark configuration.
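One way to see where the two write paths diverge is to compare the DDL Hive holds for each table. A hedged diagnostic sketch (table names follow the example above; `SHOW CREATE TABLE` is supported by Spark SQL, though the exact output formatting may vary by version):

spark.sql("SHOW CREATE TABLE dwh_zone_status").show(false)
spark.sql("SHOW CREATE TABLE zone_status_insert").show(false)
// If zone_status_insert reports ROW FORMAT DELIMITED FIELDS TERMINATED BY ',',
// the text SerDe will split the JSON column at its embedded commas on read.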

scala apache-spark apache-spark-sql spark-dataframe
2 Answers

0 votes

You can use the following alternatives to insert the data into the table.

val zoneStatusDF = zoneStatus.
  select(col("site-id"), col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts"))

zoneStatusDF.registerTempTable("zone_status_insert")

Or

zoneStatus.sqlContext.sql("create table zone_status_insert as select * from zone_status")  
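As a side note, registerTempTable is deprecated since Spark 2.0 in favor of createOrReplaceTempView, and registering a view by itself does not write anything; the insert has to be issued explicitly. A sketch of the same alternative with the current API, assuming the target table zone_status_insert already exists (the temp view name is illustrative):

// createOrReplaceTempView supersedes the deprecated registerTempTable
zoneStatusDF.createOrReplaceTempView("zone_status_tmp")
// Write through SQL instead of DataFrameWriter.insertInto
spark.sql("INSERT OVERWRITE TABLE zone_status_insert SELECT * FROM zone_status_tmp")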

0 votes

The reason is that the table was created with

ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE

After removing ROW FORMAT DELIMITED FIELDS TERMINATED BY ',', the entire content can be saved with insertInto.
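The truncation can be reproduced without Spark at all: with FIELDS TERMINATED BY ',', Hive's text SerDe splits each stored line on every comma, and a JSON value contains commas of its own, so the JSON column ends at its first embedded comma and the columns after it shift or come back NULL. A minimal pure-Scala sketch of that split (the row content is a simplified stand-in for the sample data above):

```scala
// One line of a TEXTFILE table: zone id, a JSON config column, a timestamp.
val row = """zone1,{"org-id":"efe5bdb3","channel":1,"active":true},1520453589"""

// FIELDS TERMINATED BY ',' makes the SerDe split on every comma,
// including the commas inside the JSON value:
val fields = row.split(",")

// The second declared column is cut off at the JSON's first comma,
// matching the truncated {"org-id":"..." seen in the question.
println(fields(1))  // {"org-id":"efe5bdb3"
```

This is why saveAsTable was unaffected: it created its own table with a default (non-delimited) storage format instead of reusing the comma-delimited TEXTFILE definition.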
