How do I configure maxResultSize correctly?


I can't find a way to set the driver's maximum result size. Below is my configuration.

import pyspark
from pyspark.sql import SQLContext

conf = pyspark.SparkConf().setAll([("spark.driver.extraClassPath", "/usr/local/bin/postgresql-42.2.5.jar"),
                                   ("spark.executor.instances", "4"),
                                   ("spark.executor.cores", "4"),
                                   ("spark.executor.memories", "10g"),
                                   ("spark.driver.memory", "15g"),
                                   ("spark.dirver.maxResultSize", "0"),
                                   ("spark.memory.offHeap.enabled", "true"),
                                   ("spark.memory.offHeap.size", "20g")])

sc = pyspark.SparkContext(conf=conf)
sc.getConf().getAll()
sqlContext = SQLContext(sc)

After joining two large tables and collecting the result, I get this error:

Py4JJavaError: An error occurred while calling o292.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 101 tasks (1028.8 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

I've seen similar questions on Stack Overflow that suggest increasing maxResultSize, but I can't figure out how to do it properly.
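For reference, reading the value back from the running context shows what actually took effect (a minimal check, using the `sc` created above):

# Minimal check: read back the effective limit from the live SparkContext.
# A missing key falls back to the supplied default string here.
print(sc.getConf().get("spark.driver.maxResultSize", "not set (Spark defaults to 1g)"))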

apache-spark pyspark
2 Answers

Answer 1 (2 votes):

The following should do the trick. Note also that you misspelled ("spark.executor.memories", "10g"); the correct key is 'spark.executor.memory'.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .master('yarn')  # depends on the cluster manager of your choice
    .appName('StackOverflow')
    .config('spark.driver.extraClassPath', '/usr/local/bin/postgresql-42.2.5.jar')
    .config('spark.executor.instances', 4)
    .config('spark.executor.cores', 4)
    .config('spark.executor.memory', '10g')
    .config('spark.driver.memory', '15g')
    .config('spark.memory.offHeap.enabled', True)
    .config('spark.memory.offHeap.size', '20g')
    .config('spark.driver.maxResultSize', '4g')  # give it a unit; a bare '4096' is read as bytes
    .getOrCreate()  # without this the builder never actually creates the session
)
sc = spark.sparkContext
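To confirm the value actually landed before running the heavy join, it can be read back from the session's runtime config (a quick sanity check):

# Quick sanity check: should print '4g' if the builder config took effect.
print(spark.conf.get("spark.driver.maxResultSize"))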

Alternatively, try this:

from pyspark import SparkContext
from pyspark import SparkConf

conf = SparkConf() \
          .setMaster('yarn') \
          .setAppName('StackOverflow') \
          .set('spark.driver.extraClassPath', '/usr/local/bin/postgresql-42.2.5.jar') \
          .set('spark.executor.instances', 4) \
          .set('spark.executor.cores', 4) \
          .set('spark.executor.memory', '10g') \
          .set('spark.driver.memory', '15g') \
          .set('spark.memory.offHeap.enabled', True) \
          .set('spark.memory.offHeap.size', '20g') \
          .set('spark.driver.maxResultSize', '4g')  # unit added here as well

spark_context = SparkContext(conf=conf)
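Independent of raising the limit, a common fix is to avoid pulling the whole join result onto the driver in the first place. A minimal sketch, assuming two hypothetical DataFrames df_a and df_b sharing an id column:

# Sketch only: df_a, df_b and the 'id' join key are placeholder names.
joined = df_a.join(df_b, on="id")

# Option 1: write the result to storage instead of collect()-ing it.
joined.write.mode("overwrite").parquet("/tmp/joined_output")

# Option 2: iterate rows one partition at a time, keeping driver memory bounded.
for row in joined.toLocalIterator():
    ...  # process each Row incrementally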

Answer 2 (0 votes):

Old post, but there is also a typo: "spark.dirver.maxResultSize". It should, of course, be "spark.driver.maxResultSize".
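Spark does not validate unknown configuration keys, which is why the misspelled name was accepted silently; listing everything that was set is an easy way to catch this (a small sketch):

from pyspark import SparkConf

conf = SparkConf().set("spark.dirver.maxResultSize", "0")  # typo is accepted without complaint
for key, value in conf.getAll():  # printing all keys makes such typos easy to spot
    print(key, "=", value)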
