Spark parameters not taking effect in SparkSubmitOperator (Airflow)

Question

I have passed the following Spark parameters in the conf of SparkSubmitOperator, but they do not seem to take effect when the job runs.

my_conf = {
        'spark.io.compression.codec' : 'snappy',
        'spark.scheduler.listenerbus.eventqueue.size' : '30000',
        'spark.yarn.queue' : 'pixel',
        'spark.driver.cores' : '5',
        'spark.dynamicAllocation.minExecutors' : '100',
        'spark.dynamicAllocation.maxExecutors' : '300',
        'spark.shuffle.compress' : 'false',
        'spark.sql.tungsten.enabled' : 'true',
        'spark.shuffle.spill' : 'true',
        'spark.sql.parquet.compression.codec' : 'snappy',
        'spark.speculation' : 'true',
        'spark.kryo.referenceTracking' : 'false',
        'spark.hadoop.parquet.block.size' : '134217728',
        'spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version' : '2',
        'spark.executor.memory' : '22g',
        'spark.hadoop.dfs.blocksize' : '134217728',
        'spark.shuffle.manager' : 'sort',
        'spark.driver.memory' : '25g',
        'spark.hadoop.mapreduce.input.fileinputformat.split.minsize' : '134217728',
        'spark.akka.frameSize' : '1024',
        'spark.yarn.executor.memoryOverhead' : '3120',
        'spark.sql.parquet.filterPushdown' : 'true',
        'spark.sql.inMemoryColumnarStorage.compressed' : 'true',
        'spark.hadoop.parquet.enable.summary-metadata' : 'false',
        'spark.serializer' : 'org.apache.spark.serializer.KryoSerializer',
        'spark.rdd.compress' : 'true',
        'spark.task.maxFailures' : '50',
        'spark.yarn.max.executor.failures' : '30',
        'spark.yarn.maxAppAttempts' : '1',
        'spark.default.parallelism' : '2001',
        'spark.network.timeout' : '1200s',
        'spark.hadoop.dfs.client.read.shortcircuit' : 'true',
        'spark.dynamicAllocation.enabled' : 'true',
        'spark.executor.cores' : '5',
        'spark.yarn.driver.memoryOverhead' : '5025',
        'spark.shuffle.consolidateFiles' : 'true',
        'spark.sql.parquet.mergeSchema' : 'false',
        'spark.sql.avro.compression.codec' : 'snappy',
        'spark.hadoop.dfs.domain.socket.path' : '/var/lib/hadoop-hdfs/dn_socket',
        'spark.shuffle.spill.compress' : 'false',
        'spark.sql.caseSensitive' : 'true',
        'spark.hadoop.mapreduce.use.directfileoutputcommitter' : 'true',
        'spark.shuffle.service.enabled' : 'true',
        'spark.driver.maxResultSize' : '0',
        'spark.sql.shuffle.partitions' : '2001'}
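
One way to narrow this down is to dump the effective configuration from inside the running application and see which of these values actually arrived. The job here is a Scala jar, but a purely illustrative PySpark equivalent of that check would be:

# Illustrative PySpark check; the actual job in the question is a Scala jar.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Print every property the running application actually received.
for key, value in spark.sparkContext.getConf().getAll():
    print(key, "=", value)

# Or check just the property in question:
print(spark.sparkContext.getConf().get("spark.yarn.queue", "not set"))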

Below is the SparkSubmitOperator invocation used in Airflow to run the Spark job:

SparkSubmitOperator(
    task_id='ml_agg',
    application='/home/hdfs/airflow/dags/ML_Agg/ML_Aggregation-assembly-1.0.jar',
    conf=my_conf,
    conn_id='spark_default',
    files=None,
    py_files=None,
    archives=None,
    driver_class_path=None,
    jars=None,
    java_class='com.pubmatic.ml.MLAggregation_v2',
    packages='com.databricks:spark-csv_2.11:1.3.0,com.databricks:spark-avro_2.11:2.0.1',
    exclude_packages=None,
    repositories=None,
    keytab=None,
    principal=None,
    name='test_airflow_ml_aggregation',
    application_args=application_args,
    env_vars=None,
    verbose=False,
    spark_binary="spark-submit",
    dag=my_dag
)

The extra configuration of the spark_default connection is set as follows:

{"queue":"default","deploy_mode": "cluster", "spark_home": "", "spark_binary": "spark-submit", "namespace": "default"}

Still, the job is running in the default queue on YARN.

Is there anything else I need to do?

apache-spark yarn airflow spark-submit
1 Answer

Your spark.yarn.queue setting is commented out. You need to uncomment it for the job to run in the pixel queue.

To use a queue with spark-submit, you can run the spark-submit command as follows:

spark-submit --master yarn --conf spark.executor.memory=XG --conf spark.driver.memory=YG --packages [packages separated by ,] --queue [queue_name] --class [class_name] [jar_file] [arguments]
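
On the Airflow side, the counterpart of --queue is the queue field in the spark_default connection extra shown in the question. A hedged sketch of pointing it at the pixel queue follows; whether an explicit --queue takes precedence over a spark.yarn.queue entry in conf is an assumption you should verify against your Spark version:

# Hypothetical fix: set the YARN queue in the spark_default connection extra,
# which SparkSubmitHook would pass along as "--queue pixel".
spark_default_extra = {
    "queue": "pixel",
    "deploy_mode": "cluster",
    "spark_home": "",
    "spark_binary": "spark-submit",
    "namespace": "default",
}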
