Spark SQL can't find data in Hive?


My Java application code is:

    SparkSession spark = SparkSession.builder()
        .appName(topics)
        .config("hive.metastore.uris", "thrift://device1:9083")
        .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
        .enableHiveSupport()
        .getOrCreate();

spark.sql("show databases ").show();

// it only prints default
+------------+
|databaseName|
+------------+
|     default|
+------------+
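
For reference, a quick sanity check (a sketch, assuming the session created above) is to print the settings the running session actually resolved; spark.sql.catalogImplementation should be hive when enableHiveSupport() took effect:

    // Print the effective settings of the running session.
    // "spark.sql.catalogImplementation" should be "hive" if
    // enableHiveSupport() actually took effect.
    System.out.println(spark.conf().get("spark.sql.catalogImplementation", "<not set>"));
    System.out.println(spark.conf().get("hive.metastore.uris", "<not set>"));
    System.out.println(spark.conf().get("spark.sql.warehouse.dir", "<not set>"));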

Below is the output of hdfs ls:

Found 8 items
drwxrwxr-x   - fangzebin hive          0 2019-08-07 10:10 /user/hive/warehouse/fangzebin.db
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_account
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_cal_dt
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_category_groupings
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_country
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_sales
drwxrwxr-x   - hive      hive          0 2020-05-06 23:47 /user/hive/warehouse/ods.db
drwxrwxr-x   - root      hive          0 2020-05-16 18:13 /user/hive/warehouse/zhihu.db

I tried adding hive-site.xml under resources/conf/ in my Maven project, but it still doesn't work:

<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://device1:9083</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>300</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.warehouse.subdir.inherit.perms</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask.size</name>
    <value>20971520</value>
  </property>
  <property>
    <name>hive.optimize.bucketmapjoin.sortedmerge</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.smbjoin.cache.rows</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.server2.logging.operation.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/var/log/hive/operation_logs</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>
  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hive.exec.copyfile.maxsize</name>
    <value>33554432</value>
  </property>
  <property>
    <name>hive.exec.reducers.max</name>
    <value>1099</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.checkinterval</name>
    <value>4096</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.flush.percent</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.compute.query.using.stats</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.execution.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.execution.reduce.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.vectorized.input.format</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.checked.expressions</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.vector.serde.deserialize</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.vectorized.adaptor.usage.mode</name>
    <value>chosen</value>
  </property>
  <property>
    <name>hive.vectorized.input.format.excludes</name>
    <value>org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat</value>
  </property>
  <property>
    <name>hive.merge.mapfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.mapredfiles</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.cbo.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion</name>
    <value>minimal</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion.threshold</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.limit.pushdown.memory.usage</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.merge.sparkfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.smallfiles.avgsize</name>
    <value>16777216</value>
  </property>
  <property>
    <name>hive.merge.size.per.task</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication.min.reducer</name>
    <value>4</value>
  </property>
  <property>
    <name>hive.map.aggr</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.map.aggr.hash.percentmemory</name>
    <value>0.5</value>
  </property>
  <property>
    <name>hive.optimize.sort.dynamic.partition</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>mr</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>5318325043b</value>
  </property>
  <property>
    <name>spark.driver.memory</name>
    <value>966367641b</value>
  </property>
  <property>
    <name>spark.executor.cores</name>
    <value>4</value>
  </property>
  <property>
    <name>spark.yarn.driver.memoryOverhead</name>
    <value>102m</value>
  </property>
  <property>
    <name>spark.yarn.executor.memoryOverhead</name>
    <value>895m</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.initialExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.minExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.maxExecutors</name>
    <value>2147483647</value>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.support.concurrency</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>device1,device2,device3</value>
  </property>
  <property>
    <name>hive.zookeeper.client.port</name>
    <value>2181</value>
  </property>
  <property>
    <name>hive.zookeeper.namespace</name>
    <value>hive_zookeeper_namespace_hive</value>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.class</name>
    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.use.SSL</name>
    <value>false</value>
  </property>
  <property>
    <name>spark.shuffle.service.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.strict.checks.orderby.no.limit</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.no.partition.filter</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.type.safety</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.strict.checks.cartesian.product</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.bucketing</name>
    <value>true</value>
  </property>
</configuration>
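
One detail that may matter here (my assumption, based on how Spark locates the file): Spark looks for hive-site.xml at the root of the classpath, so a copy under resources/conf/ ends up at conf/hive-site.xml inside the jar and is never seen. A minimal check, assuming it runs in the same application:

    // If this prints null, hive-site.xml is not at the classpath root,
    // and Spark silently falls back to a local Derby metastore that only
    // contains the "default" database.
    System.out.println(Thread.currentThread()
        .getContextClassLoader()
        .getResource("hive-site.xml"));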

There are no exceptions at all, but I cannot figure out why Spark SQL can't find the databases and tables in my Hive.

This is the output from the Hive console:

> show databases;
OK
default
fangzebin
kylindb
ods
zhihu
Time taken: 1.69 seconds, Fetched: 5 row(s)

My Spark version is spark-2.4.4-bin-without-hadoop.

java apache-spark hive cloudera-cdh
2 Answers
0 votes

You have added hive-site.xml to the classpath, but somehow it is not being loaded.

You can try replacing device1 with the IP address of the machine, i.e. thrift://ip_address_of_system:9083, and also adding the hive-site.xml file as shown below:

    spark.sparkContext().addFile("hive-site.xml");
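
Another thing worth trying (an assumption on my part, prompted by the "Using an existing SparkContext; some configuration may not take effect" warning in the logs of the other answer): builder .config() calls can be ignored when a SparkContext already exists, and the spark.hadoop.* prefix forces a value into the Hadoop configuration handed to the Hive client.

    // Sketch, assuming no SparkContext has been created yet: spark.hadoop.*
    // settings are copied into the Hadoop Configuration that backs the
    // Hive metastore client.
    SparkSession spark = SparkSession.builder()
        .appName(topics)
        .config("spark.hadoop.hive.metastore.uris", "thrift://device1:9083")
        .enableHiveSupport()
        .getOrCreate();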


0 votes

My case was quite difficult to solve. Basically, I use every component from CDH 6.2, but I installed vanilla Spark (spark-2.4.4-bin-without-hadoop) on the cluster and pointed SPARK_HOME at it, because I prefer using vanilla Spark.

After I tried the Spark that ships with CDH (remember to unset the original SPARK_HOME first, otherwise it causes problems), it successfully read the data in Hive!

Comparing the logs from the two Spark installations, I found some differences.

When I used the vanilla Spark, it showed:

20/05/17 21:42:04 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
20/05/17 21:42:04 INFO internal.SharedState: loading hive config file: file:/data/software/spark-2.4.4-bin-without-hadoop/conf/hive-site.xml
20/05/17 21:42:04 INFO internal.SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse').
20/05/17 21:42:04 INFO internal.SharedState: Warehouse path is '/user/hive/warehouse'.
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@64bfd6fd{/SQL,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2ab2710{/SQL/json,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6818d900{/SQL/execution,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@149f5761{/SQL/execution/json,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6dcd5639{/static/sql,null,AVAILABLE,@Spark}
20/05/17 21:42:06 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/05/17 21:42:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/05/17 21:42:10 INFO codegen.CodeGenerator: Code generated in 191.505638 ms
20/05/17 21:42:10 INFO codegen.CodeGenerator: Code generated in 8.313303 ms
+------------+
|databaseName|
+------------+
|     default|
+------------+

When using the Spark bundled with CDH, the log showed:

20/05/17 21:47:39 INFO client.HiveClientImpl: Warehouse location for Hive client (version 2.1.1) is /user/hive/warehouse
20/05/17 21:47:39 INFO hive.metastore: HMS client filtering is enabled.
20/05/17 21:47:39 INFO hive.metastore: Trying to connect to metastore with URI thrift://device1:9083
20/05/17 21:47:39 INFO hive.metastore: Opened a connection to metastore, current connections: 1
20/05/17 21:47:39 INFO hive.metastore: Connected to metastore.
20/05/17 21:47:39 INFO codegen.CodeGenerator: Code generated in 141.896818 ms
20/05/17 21:47:39 INFO codegen.CodeGenerator: Code generated in 7.683993 ms
+------------+
|databaseName|
+------------+
|     default|
|   fangzebin|
|     kylindb|
|         ods|
|       zhihu|
+------------+

It seems the vanilla Spark failed to pick up hive.metastore.uris from this configuration, but I still don't know why that happens.
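
For anyone who wants to dig further, one check worth trying (a sketch, not verified on this cluster) is to inspect the Hadoop configuration the session ended up with; the "loading hive config file: file:/data/software/spark-2.4.4-bin-without-hadoop/conf/hive-site.xml" line above suggests the copy under SPARK_HOME/conf won over the one on the application classpath:

    // Hypothetical check: if hive.metastore.uris prints null here, the
    // hive-site.xml that was loaded did not carry it, and Spark quietly
    // used a local Derby metastore instead of the remote one.
    org.apache.hadoop.conf.Configuration hadoopConf =
        spark.sparkContext().hadoopConfiguration();
    System.out.println(hadoopConf.get("hive.metastore.uris"));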
