Output Hive table is bucketed but Spark currently does NOT populate bucketed output which is compatible with Hive


I have an Apache Spark (v2.4.2) dataframe that I want to insert into a Hive table.

df = spark.sparkContext.parallelize([["c1",21, 3], ["c1",32,4], ["c2",4,40089], ["c2",439,6889]]).toDF(["c", "n", "v"])
df.createOrReplaceTempView("df")

And I have created a Hive table:

 spark.sql("create table if not exists sample_bucket(n INT, v INT)
 partitioned by (c STRING) CLUSTERED BY(n) INTO 3 BUCKETS")
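As a quick sanity check, the table's partition and bucket spec can be inspected with DESCRIBE FORMATTED:

 # The output should include "Num Buckets: 3" and "Bucket Columns: [`n`]".
 spark.sql("DESCRIBE FORMATTED sample_bucket").show(50, truncate=False)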

Then I try to insert the data from the dataframe df into the sample_bucket table:

 spark.sql("INSERT OVERWRITE table SAMPLE_BUCKET PARTITION(c)  select n, v, c from df")

This gives me an error saying:

 Output Hive table `default`.`sample_bucket` is bucketed but Spark currently 
 does NOT populate bucketed output which is compatible with Hive.;

I have tried several approaches that did not work, one of which is:

 spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
 spark.sql("set hive.enforce.bucketing=true")
 spark.sql("INSERT OVERWRITE table SAMPLE_BUCKET PARTITION(c)  select n, v, c from df cluster by n")

But no luck. Can anyone help me?

apache-spark hive bucket
1 Answer

Instead of going through the Spark SQL statements, see if you can insert into a persistent Hive table with the DataFrame writer. Below is a sample code snippet for the same:

df = spark.sparkContext.parallelize([["c1",21, 3], ["c1",32,4], ["c2",4,40089], ["c2",439,6889]]).toDF(["c", "n", "v"])
df.createOrReplaceTempView("df")

(df.write
    .mode("overwrite")           # replace the table if it already exists
    .partitionBy("c")            # partition column, matching the DDL above
    .bucketBy(42, "n")           # Spark-native bucketing on column n
    .saveAsTable("SAMPLE_BUCKET"))
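To verify the write, the saved table can be read back through the Spark catalog:

# Read the saved table back; rows for both values of c should appear.
spark.table("SAMPLE_BUCKET").orderBy("c", "n").show()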

Please check if that works for you.
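One caveat: bucketBy writes Spark's own bucketing layout, which Hive does not recognize, so this avoids the error rather than producing Hive-compatible buckets. If the table must be bucketed in Hive's format, a common workaround is to let Spark write a plain partitioned staging table and have Hive itself perform the bucketed insert. A minimal sketch, assuming a Hive client such as beeline is available (the staging table name sample_stage is made up for this example):

 # Spark side: write a plain, non-bucketed partitioned staging table.
 spark.sql("""create table if not exists sample_stage(n INT, v INT)
              partitioned by (c STRING)""")
 spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
 spark.sql("INSERT OVERWRITE TABLE sample_stage PARTITION(c) select n, v, c from df")

 # Hive side (e.g. in beeline), where bucketed inserts are supported:
 #   SET hive.enforce.bucketing = true;
 #   SET hive.exec.dynamic.partition.mode = nonstrict;
 #   INSERT OVERWRITE TABLE sample_bucket PARTITION(c)
 #   SELECT n, v, c FROM sample_stage;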
