Spark memory issue - "There is no enough memory to build hash map"


I am doing a left join between two tables - the left table (call it A) has 1.4 billion records and the size estimator puts it at about 17 GB; the right table (B) has 100 million records and is roughly 1.3 GB.

The output DataFrame of this join has 120 million records (filters reduce the record count after the left join). The output DataFrame is used twice downstream, so I cache it.

Now, when I run the job as-is, it takes 1 hour 40 minutes.

So, to reduce the processing time, I tried to use a broadcast join for that first left join between A and B. This is where the problem starts.
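The DataFrame-API shape of what I am attempting looks roughly like this (a minimal sketch only; dfA, dfB and the join key "ivid" stand in for my real tables, not their actual definitions):

import org.apache.spark.sql.functions.broadcast

// Hint Spark to broadcast the smaller table B to every executor,
// then left-join it to the large table A.
val joined = dfA.join(broadcast(dfB), Seq("ivid"), "left")

// The joined output is consumed twice downstream, so cache it once.
joined.cache()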

I am using the following configuration parameters -

"spark.ui.threadDumpsEnabled" = "true"
"spark.memory.storageFraction" = "0.5"
"spark.sql.shuffle.partitions" = "400"
"spark.driver.maxResultSize" = "8G"
"spark.sql.broadcastTimeout" = "36000"
"spark.driver.memory" = "30G"
"spark.executor.memory" = "15G"
"spark.executor.cores" = "4"
"spark.executor.instances" = "3"
"spark.kryoserializer.buffer.max" = "1500m"
"spark.driver.cores" = "4"
"spark.dynamicAllocation.enabled" = "false"
"spark.serializer" = "org.apache.spark.serializer.KryoSerializer"
"spark.executor.defaultJavaOptions=-XX:+UseG1GC -Xss100M -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent" = "45"
"spark.driver.defaultJavaOptions=-XX:+UseG1GC -Xss100M -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent" = "45"

All of these parameters were added one after another as I kept running into memory issues.

"spark.sql.broadcastTimeout" = "36000"
-- this is because I was facing the following error

[org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.logError@91] - Could not execute broadcast in 300 secs.
java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]

"spark.kryoserializer.buffer.max" = "1500m"
- this was added because of -

org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 4658915. To avoid this, increase spark.kryoserializer.buffer.max value

I also ran into these errors -

b) java.lang.OutOfMemoryError: GC overhead limit exceeded
c) java.lang.StackOverflowError

But right now I am facing this issue, which is completely new to me, and I cannot find a solution -

Caused by: org.apache.spark.SparkException: There is no enough memory to build hash map
    at org.apache.spark.sql.execution.joins.UnsafeHashedRelation$.apply(HashedRelation.scala:314)
    at org.apache.spark.sql.execution.joins.HashedRelation$.apply(HashedRelation.scala:109)
    at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:857)
    at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:845)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:89)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:76)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withExecutionId$1.apply(SQLExecution.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:165)
    at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:138)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:75)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:75)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

Thinking this was happening because I was not calling any action until the very last step of the execution, I tried to break the lineage by using an intermediate action (such as count).
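What I mean by an intermediate action is roughly the following (a sketch; the checkpoint alternative is not something from my original job, and the checkpoint directory is a hypothetical path):

// Force materialisation partway through instead of one long lazy plan.
joined.cache()
joined.count()   // an action: materialises the cached result, but keeps the full lineage

// Alternative: checkpoint() actually truncates the lineage by writing to stable storage.
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")   // hypothetical path
val checkpointed = joined.checkpoint()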

But the problem persists. I have also tried increasing the driver memory significantly, with no luck.

EDIT

This is the first query I am talking about:

val a = spark.sql("""
select 
/*+ BROADCAST(schema.table2) */
visit_date, 
v4.visid, 
visit_num,
device_family,
operating_system,
'test' AS traffic_grouping,
'test2' Junk_Filter,
'test3' visit_start_evar26,
visit_start_pagename,
visit_start_url,
referrer_domain,
session_duration,
isp_domain,
pages_visited,
qbo_sui_click,
qbo_sui_trial_click,
qbo_sui_buy_click,
qbse_sui_click,
qbse_sui_trial_click,
qbse_sui_buy_click,
qbse_customer_flag,
qbo_customer_flag,
case when s.ivid is not null then 1 else 0 end as segment_qbo
from schema.table2 v4
left join schema.table2 s 
    on v4.ivid = s.ivid and v4.visit_date > s.qbo_signup_date and source = 'check'
WHERE visit_date >= '2020-07-14'
and country in ('Australia','Australia')
and  ((ipd_flag = 0
    AND sbbg_flag = 0)
  or lower(visit_start_evar26) like ('%buy%'))
group by 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
""")

This is where I am doing the broadcast join.
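A quick way to check whether the hint is actually honoured (not from my original run, just how I would verify it) is to print the physical plan and look for BroadcastHashJoin rather than SortMergeJoin:

// Extended plan output; an honoured hint shows a BroadcastExchange / BroadcastHashJoin node.
a.explain(true)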

If anyone has any clue, it would really help. Thanks in advance!

java scala apache-spark apache-spark-sql heap-memory
1 Answer
0 votes

I was facing the same issue, and I found this article on the Databricks website: https://kb.databricks.com/en_US/python/job-fails-with-not-enough-memory-to-build-hash-map-error. Apparently, on Databricks Runtime 11.3 LTS and above, you should rely on Adaptive Query Execution to perform the join instead of an explicit broadcast hint.
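In practice that means dropping the explicit /*+ BROADCAST */ hint and letting AQE choose the join strategy at runtime. A minimal sketch of the relevant settings (values are illustrative, not a verified fix for this particular job):

// Let Adaptive Query Execution plan the join from runtime statistics.
spark.conf.set("spark.sql.adaptive.enabled", "true")
// Disable the static broadcast threshold so only AQE decides whether to broadcast.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")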
