Configuring a large Hive import job

Problem description · votes: 0 · answers: 1

I'm a newbie trying to take a large (1.25 TB uncompressed) HDFS file and load it into a Hive managed table. It is already on HDFS in csv format (from sqoop) with an arbitrary partitioning, and I am putting it into a more organized format for querying and joining. I am on HDP 3.0 using Tez. Here is my hql:

USE MYDB;

DROP TABLE IF EXISTS new_table;

CREATE TABLE IF NOT EXISTS new_table (
 svcpt_id VARCHAR(20),
 usage_value FLOAT,
 read_time SMALLINT)
PARTITIONED BY (read_date INT)
CLUSTERED BY (svcpt_id) INTO 9600 BUCKETS
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS ORC
TBLPROPERTIES("orc.compress"="snappy");

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=2000;
SET hive.exec.max.dynamic.partitions=10000;
SET hive.vectorized.execution.enabled = true;
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.enforce.bucketing = true;
SET mapred.reduce.tasks = 10000;

INSERT OVERWRITE TABLE new_table
PARTITION (read_date)
SELECT svcpt_id, usage, read_time, read_date
FROM raw_table;
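
With dynamic partitioning, Hive maps the SELECT columns to the table columns positionally, so it is the trailing read_date column that routes each row into its partition. As a sketch (my own sanity check against the raw_table above, not something from a run), it can help to see how many read_date partitions exist and how skewed they are before launching the big insert:

SELECT read_date, COUNT(*) AS rows_per_day
FROM raw_table
GROUP BY read_date
ORDER BY rows_per_day DESC
LIMIT 20;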

Here is where the Tez job stands (from my most recent failure):

--------------------------------------------------------------------------------
VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1      SUCCEEDED   1043       1043        0        0       0       0
Reducer 2    RUNNING   9600        735       19     8846       0       0
Reducer 3     INITED  10000          0        0    10000       0       0
--------------------------------------------------------------------------------
VERTICES: 01/03  [==>>------------------------] 8%    ELAPSED TIME: 45152.59 s
--------------------------------------------------------------------------------

I have been chipping away at this problem. At first I could not get the first Map 1 vertex to run at all, so I added bucketing. With 96 buckets the first mapper ran, but Reducer 2 failed with complaints about disk space that did not make sense. I then raised the bucket count to 9600 and the reduce tasks to 10000, and the Reducer 2 vertex did start running, albeit slowly. This morning I found it had errored out because my namenode went down from a Java heap space error in the garbage collector.

Does anyone have any guidance for me? I feel like I am shooting in the dark with the number of reduce tasks, the number of buckets, and all of the configuration shown below.
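
For a rough sense of scale (back-of-the-envelope, not measured): with hive.exec.reducers.bytes.per.reducer = 1 GB, Hive would auto-size an ordinary reduce stage at about ceil(1.25 TB / 1 GB) ≈ 1280 reducers, but a bucketed insert pins the reducer count to the bucket count instead, which is presumably why Reducer 2 above shows exactly 9600 tasks. The knobs involved look like this:

SET hive.exec.reducers.bytes.per.reducer=1073741824; -- 1 GB per reducer
SET hive.exec.reducers.max=1009;                     -- Hive's default cap on auto-sized reducers
-- With CLUSTERED BY ... INTO 9600 BUCKETS, the bucketed reduce stage
-- runs 9600 tasks regardless of the two settings above.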

hive.tez.container.size = 5120MB
hive.exec.reducers.bytes.per.reducer = 1GB
hive.exec.max.dynamic.partitions = 5000
hive.optimize.sort.dynamic.partition = FALSE
hive.vectorized.execution.enabled = TRUE
hive.vectorized.execution.reduce.enabled = TRUE
yarn.scheduler.minimum-allocation-mb = 2G
yarn.scheduler.maximum-allocation-mb = 8G
mapred.min.split.size=?
mapred.max.split.size=?
hive.input.format=?
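
For the settings still marked with a question mark, purely illustrative starting points (the exact numbers are guesses, and on Tez the split grouping is governed by the tez.grouping.* properties rather than the mapred ones) might be:

SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat; -- Hive's default
SET mapred.min.split.size=268435456;   -- 256 MB floor per split (guess)
SET mapred.max.split.size=1073741824;  -- 1 GB ceiling per split (guess)
-- Tez equivalents that usually take precedence:
SET tez.grouping.min-size=268435456;
SET tez.grouping.max-size=1073741824;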

I have not set up LLAP yet.

My cluster has 4 nodes, with 32 cores and 120 GB of memory. I am not using more than 1/3 of the cluster's storage.
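
If the 120 GB of memory is per node (back-of-the-envelope only): at hive.tez.container.size = 5120 MB, each node fits roughly floor(120 GB / 5 GB) ≈ 24 containers, so about 96 run concurrently across the 4 nodes, and 9600 reduce tasks would complete in around 100 waves:

-- Illustrative sizing, not measured values:
SET hive.tez.container.size=5120; -- MB, as listed above
SET tez.runtime.io.sort.mb=1638;  -- rule of thumb: roughly 1/3 of container memory
-- 9600 tasks / ~96 concurrent containers ≈ 100 waves of reducers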

hadoop hive hortonworks-data-platform apache-tez
1 Answer (score: 0)
SET hive.execution.engine = tez;
SET hive.vectorized.execution.enabled = false;
SET hive.vectorized.execution.reduce.enabled = false;
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.stats.autogather = true;
SET hive.exec.parallel = true;
SET hive.exec.parallel.thread.number = 60;
SET mapreduce.job.skiprecords = true;
SET mapreduce.map.maxattempts = 10;
SET mapreduce.reduce.maxattempts = 10;
SET mapreduce.map.skip.maxrecords = 300;
SET mapreduce.task.skip.start.attempts = 1;
SET mapreduce.output.fileoutputformat.compress = false;
SET mapreduce.job.reduces = 1000;

You could try some of the settings above!
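
One caveat: with hive.execution.engine = tez, some of the mapreduce.* properties above (the maxattempts and record-skipping ones in particular) are generally not consulted. A rough, unverified mapping to their Tez-side counterparts:

SET tez.am.task.max.failed.attempts=10; -- approximate Tez analogue of the MR maxattempts settings
SET tez.am.max.app.attempts=2;          -- retries of the whole Tez application master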
