Physical memory limit exceeded

Question (0 votes, 2 answers)

Below is my spark-submit command:

spark2-submit --class my.class \
--master yarn \
--deploy-mode cluster \
--queue queue-name \
--executor-memory 10G \
--driver-memory 20G \
--num-executors 60 \
--conf spark.executor.memoryOverhead=4G \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.dynamicAllocation.maxExecutors=480 \
$HOME/myjar.jar param1 param2 param3

Error:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 50 in stage 27.0 failed 4 times, 
most recent failure: Lost task 50.4 in stage 27.0 (TID 20899, cdts13hdfc07p.rxcorp.com, executor 962): 
ExecutorLostFailure (executor 962 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 
15.7 GB of 14 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

My questions:

  1. I allocated 10G of executor memory, so where does the 14 GB come from?
  2. I already set 4G of overhead, which is 40% of the executor memory, but it still suggests increasing the overhead memory.
apache-spark
2 Answers
0 votes

You have allocated 10GB to each Spark executor; you also need to make sure the machines/nodes running those executors have enough resources left to cover the executors' other needs.
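
As a quick sanity check, you can compare the container size you are requesting against what each NodeManager is allowed to hand out. A minimal sketch, assuming the standard Hadoop client config location (/etc/hadoop/conf is an assumption and may differ on your cluster):

# Per-node memory that YARN can allocate to containers
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
# Largest single container YARN will grant
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml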


0 votes

The container's memory is the sum of memory + memoryOverhead. So the memory for your executor container is 10G + 4G = 14G.
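
That 14G is exactly the "14 GB physical memory" limit YARN enforced in the error, so the container was killed once usage reached 15.7 GB. A minimal sketch of one way to adjust the submission, reusing the options from the question (the 6G overhead below is purely illustrative, not a tuned recommendation):

# New container request would be 10G heap + 6G overhead = 16G instead of 14G
spark2-submit --class my.class \
--master yarn \
--deploy-mode cluster \
--queue queue-name \
--executor-memory 10G \
--driver-memory 20G \
--num-executors 60 \
--conf spark.executor.memoryOverhead=6G \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.dynamicAllocation.maxExecutors=480 \
$HOME/myjar.jar param1 param2 param3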
