Error when running spark-shell: SparkContext: Error initializing SparkContext

Problem description

I have successfully installed Spark on three nodes. I can access the Spark web UI and see that the master and every worker node are alive.

I can run the SparkPi example successfully.

My cluster information:

10.45.10.33 (master & worker, hadoop-master, hadoop-slave)
10.45.10.34 (worker, hadoop-slave)
10.45.10.35 (worker, hadoop-slave)

But when I try to run "spark-shell --master yarn", it throws the following exception:

16/09/12 19:50:29 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:101)
    at $line3.$read$$iw$$iw.<init>(<console>:15)
    at $line3.$read$$iw.<init>(<console>:31)
    at $line3.$read.<init>(<console>:33)
    at $line3.$read$.<init>(<console>:37)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.$print$lzycompute(<console>:7)
    at $line3.$eval$.$print(<console>:6)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:94)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:68)
    at org.apache.spark.repl.Main$.main(Main.scala:51)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/09/12 19:50:29 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
16/09/12 19:50:29 WARN MetricsSystem: Stopping a MetricsSystem that is not running
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:101)
  ... 47 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

Here is my configuration:

1. spark-env.sh

export JAVA_HOME=/root/Downloads/jdk1.8.0_77
export SPARK_HOME=/root/Downloads/spark-2.0.0-bin-without-hadoop
export HADOOP_HOME=/root/Downloads/hadoop-2.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/root/Downloads/hadoop-2.7.2/bin/hadoop classpath)
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
SPARK_MASTER_HOST=10.45.10.33
SPARK_MASTER_WEBUI_PORT=28686
SPARK_LOCAL_DIRS=/root/Downloads/spark-2.0.0-bin-without-hadoop/sparkdata/local
SPARK_WORKER_DIR=/root/Downloads/spark-2.0.0-bin-without-hadoop/sparkdata/work
SPARK_LOG_DIR=/root/Downloads/spark-2.0.0-bin-without-hadoop/logs
2. spark-defaults.conf

spark.eventLog.enabled    true
spark.eventLog.dir        hdfs://10.45.10.33/spark-event-log

3. slaves

10.45.10.33
10.45.10.34
10.45.10.35

Here is some log information. The YARN application log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/Downloads/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/Downloads/hadoop-2.7.2/share/hadoop/common/lib/alluxio-core-client-1.2.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/Downloads/alluxio-master/core/client/target/alluxio-core-client-1.2.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/09/14 11:21:08 INFO SignalUtils: Registered signal handler for TERM
16/09/14 11:21:08 INFO SignalUtils: Registered signal handler for HUP
16/09/14 11:21:08 INFO SignalUtils: Registered signal handler for INT
16/09/14 11:21:14 INFO ApplicationMaster: Preparing Local resources
16/09/14 11:21:15 ERROR ApplicationMaster: RECEIVED SIGNAL TERM

The YARN NodeManager log on the running node:

2016-09-14 01:26:41,321 WARN alluxio.logger.type: Worker Client last execution took 2271 ms. Longer than the interval 1000
2016-09-14 06:13:10,905 WARN alluxio.logger.type: Worker Client last execution took 1891 ms. Longer than the interval 1000
2016-09-14 08:41:36,122 WARN alluxio.logger.type: Worker Client last execution took 1625 ms. Longer than the interval 1000
2016-09-14 10:41:49,426 WARN alluxio.logger.type: Worker Client last execution took 2441 ms. Longer than the interval 1000
2016-09-14 11:18:44,355 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1473752235721_0009_000002 (auth:SIMPLE)
2016-09-14 11:18:45,319 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1473752235721_0009_02_000001 by user root
2016-09-14 11:18:45,447 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1473752235721_0009
2016-09-14 11:18:45,601 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root IP=10.45.10.33  OPERATION=Start Container Request   TARGET=ContainerManageImpl  RESULT=SUCCESS  APPID=application_1473752235721_0009    CONTAINERID=container_1473752235721_0009_02_000001
2016-09-14 11:18:45,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1473752235721_0009 transitioned from NEW to INITING
2016-09-14 11:18:45,815 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Adding container_1473752235721_0009_02_000001 to application application_1473752235721_0009
2016-09-14 11:18:45,865 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1473752235721_0009 transitioned from INITING to RUNNING
2016-09-14 11:18:46,060 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from NEW to LOCALIZING
2016-09-14 11:18:46,060 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1473752235721_0009
2016-09-14 11:18:46,211 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.45.10.33:8020/user/root/.sparkStaging/application_1473752235721_0009/__spark_libs__8339309767420855025.zip transitioned from INIT to DOWNLOADING
2016-09-14 11:18:46,211 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.45.10.33:8020/user/root/.sparkStaging/application_1473752235721_0009/__spark_conf__.zip transitioned from INIT to DOWNLOADING
2016-09-14 11:18:46,223 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1473752235721_0009_02_000001
2016-09-14 11:18:47,083 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1473752235721_0009_02_000001.tokens. Credentials list: 
2016-09-14 11:18:47,658 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user root
2016-09-14 11:18:47,761 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1473752235721_0009_02_000001.tokens to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001.tokens
2016-09-14 11:18:47,765 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009 = file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009
2016-09-14 11:20:54,352 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.45.10.33:8020/user/root/.sparkStaging/application_1473752235721_0009/__spark_libs__8339309767420855025.zip(->/tmp/hadoop-root/nm-local-dir/usercache/root/filecache/10/__spark_libs__8339309767420855025.zip) transitioned from DOWNLOADING to LOCALIZED
2016-09-14 11:20:55,049 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.45.10.33:8020/user/root/.sparkStaging/application_1473752235721_0009/__spark_conf__.zip(->/tmp/hadoop-root/nm-local-dir/usercache/root/filecache/11/__spark_conf__.zip) transitioned from DOWNLOADING to LOCALIZED
2016-09-14 11:20:55,052 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from LOCALIZING to LOCALIZED
2016-09-14 11:20:57,298 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from LOCALIZED to RUNNING
2016-09-14 11:20:57,509 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001/default_container_executor.sh]
2016-09-14 11:20:58,338 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1473752235721_0009_02_000001
2016-09-14 11:21:07,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 26593 for container-id container_1473752235721_0009_02_000001: 50.3 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used
2016-09-14 11:21:15,218 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 26593 for container-id container_1473752235721_0009_02_000001: 90.9 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2016-09-14 11:21:15,224 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Process tree for container: container_1473752235721_0009_02_000001 has processes older than 1 iteration running over the configured limit. Limit=2254857728, current usage = 2424918016
2016-09-14 11:21:15,412 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=26593,containerID=container_1473752235721_0009_02_000001] is running beyond virtual memory limits. Current usage: 90.9 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1473752235721_0009_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 26593 26591 26593 26593 (bash) 1 0 115838976 119 /bin/bash -c /usr/java/jdk1.8.0_91/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001/tmp -Dspark.yarn.app.container.log.dir=/root/Downloads/hadoop-2.7.2/logs/userlogs/application_1473752235721_0009/container_1473752235721_0009_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg '10.45.10.33:54976' --properties-file /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001/__spark_conf__/__spark_conf__.properties 1> /root/Downloads/hadoop-2.7.2/logs/userlogs/application_1473752235721_0009/container_1473752235721_0009_02_000001/stdout 2> /root/Downloads/hadoop-2.7.2/logs/userlogs/application_1473752235721_0009/container_1473752235721_0009_02_000001/stderr 
    |- 26597 26593 26593 26593 (java) 811 62 2309079040 23149 /usr/java/jdk1.8.0_91/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001/tmp -Dspark.yarn.app.container.log.dir=/root/Downloads/hadoop-2.7.2/logs/userlogs/application_1473752235721_0009/container_1473752235721_0009_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 10.45.10.33:54976 --properties-file /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001/__spark_conf__/__spark_conf__.properties 

2016-09-14 11:21:15,451 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Removed ProcessTree with root 26593
2016-09-14 11:21:15,469 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from RUNNING to KILLING
2016-09-14 11:21:15,471 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1473752235721_0009_02_000001
2016-09-14 11:21:15,891 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1473752235721_0009_02_000001 is : 143
2016-09-14 11:21:19,717 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-09-14 11:21:19,797 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009/container_1473752235721_0009_02_000001
2016-09-14 11:21:19,811 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Killed  TARGET=ContainerImpl    RESULT=SUCCESS  APPID=application_1473752235721_0009    CONTAINERID=container_1473752235721_0009_02_000001
2016-09-14 11:21:19,813 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1473752235721_0009_02_000001 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-09-14 11:21:19,813 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1473752235721_0009_02_000001 from application application_1473752235721_0009
2016-09-14 11:21:19,813 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1473752235721_0009
2016-09-14 11:21:21,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1473752235721_0009_02_000001
2016-09-14 11:21:21,531 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1473752235721_0009_02_000001]
2016-09-14 11:21:21,536 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1473752235721_0009 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2016-09-14 11:21:21,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1473752235721_0009
2016-09-14 11:21:21,585 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1473752235721_0009 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-09-14 11:21:21,589 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1473752235721_0009, with delay of 10800 seconds
2016-09-14 11:21:21,592 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1473752235721_0009
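
The ContainersMonitorImpl warning above shows the ApplicationMaster container being killed for running beyond YARN's virtual memory limit (2.3 GB used against a 2.1 GB cap, i.e. 1 GB of physical memory times the default vmem-pmem ratio of 2.1). Would relaxing that check help? Something like the following sketch in yarn-site.xml (my own assumption, not something I have verified; it would have to be applied on every NodeManager and YARN restarted):

<!-- Option 1: disable the virtual memory check entirely -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Option 2: keep the check but raise the allowed virtual-to-physical ratio (default is 2.1) -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>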

How can I fix this? Can anyone give some advice?

hadoop apache-spark hdfs
2 Answers
2 votes

I got this error: "Attempted to request executors before the AM has registered!"

and landed on this page without finding an answer. In case anyone else hits the same error: for me, the solution was to open the Spark ports.

On Spark 3.1.2, running on Ubuntu 20.04, you have to pin a few settings in the cluster so that the ports are not assigned randomly:

In spark-defaults.conf:

spark.driver.bindAddress                10.0.0.1
spark.driver.host                       10.0.0.1
spark.shuffle.service.port              7337
spark.ui.port                           4040
spark.blockManager.port                 31111
spark.driver.blockManager.port          32222
spark.driver.port                       33333

In spark-env.sh:

SPARK_LOCAL_IP=10.0.0.1
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export YARN_CONF_DIR=/opt/hadoop/etc/hadoop

In the workers file you can enter the addresses of the data nodes, as sketched below.
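
For example, a hypothetical workers file matching the 10.0.0.x addressing used above (with 10.0.0.1 as the driver/master):

10.0.0.2
10.0.0.3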


0 votes

What helped me was reducing the requested resources:
"spark.driver.memory":
"spark.executor.cores":
"spark.executor.memory":