AWS EMR Spark step in cluster deploy mode fails with "Application ... finished with failed status"

Question · votes: 1 · answers: 3

I am trying to launch a cluster using the AWS CLI. I use the following command:

aws emr create-cluster --name "Config1" --release-label emr-5.0.0 --applications Name=Spark --use-default-role --log-uri 's3://aws-logs-813591802533-us-west-2/elasticmapreduce/' --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m1.medium InstanceGroupType=CORE,InstanceCount=2,InstanceType=m1.medium

The cluster is created successfully. Then I add this step:

aws emr add-steps --cluster-id ID_CLUSTER --region us-west-2 --steps Name=SparkSubmit,Jar="command-runner.jar",Args=[spark-submit,--deploy-mode,cluster,--master,yarn,--executor-memory,1G,--class,Traccia2014,s3://tracceale/params/scalaProgram.jar,s3://tracceale/params/configS3.txt,30,300,2,"s3a://tracceale/Tempi1"],ActionOnFailure=CONTINUE

After a while the step fails. This is the log file:

 17/02/22 11:00:07 INFO RMProxy: Connecting to ResourceManager at ip-172-31-31-190.us-west-2.compute.internal/172.31.31.190:8032
 17/02/22 11:00:08 INFO Client: Requesting a new application from cluster with 2 NodeManagers
 17/02/22 11:00:08 INFO Client: Verifying our application has not requested  
 Exception in thread "main" org.apache.spark.SparkException: Application application_1487760984275_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1132)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1175)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 17/02/22 11:01:02 INFO ShutdownHookManager: Shutdown hook called
 17/02/22 11:01:02 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-27baeaa9-8b3a-4ae6-97d0-abc1d3762c86
 Command exiting with ret '1'

Locally (on the Hortonworks HDP 2.5 Sandbox) I run:

./spark-submit --class Traccia2014 --master local[*] --executor-memory 2G /usr/hdp/current/spark2-client/ScalaProjects/ScripRapportoBatch2.1/target/scala-2.11/traccia-22-ottobre_2.11-1.0.jar "/home/tracce/configHDFS.txt" 30 300 3

Everything works fine. I have read threads related to my problem, but I can't figure it out.

UPDATE

Checking in the Application Master, I get this error:

17/02/22 15:29:54 ERROR ApplicationMaster: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory)

at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at scala.io.Source$.fromFile(Source.scala:91)
at scala.io.Source$.fromFile(Source.scala:76)
at scala.io.Source$.fromFile(Source.scala:54)
at Traccia2014$.main(Rapporto.scala:40)
at Traccia2014.main(Rapporto.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
 17/02/22 15:29:55 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory))

I pass the S3 path mentioned above, "s3://tracceale/params/configS3.txt", to the function 'fromFile' like this:

for(line <- scala.io.Source.fromFile(logFile).getLines())

How can I solve it? Thanks in advance.

apache-spark aws-cli amazon-emr spark-submit
3 Answers
0 votes

Since you are using cluster deploy mode, the logs you have included are not useful at all. They merely say that the application failed, not why it failed. To figure out why it failed, you at least need to look at the Application Master logs, since that is where the Spark driver runs in cluster deploy mode, and it will probably give a better hint as to why the application failed.

Since you configured your cluster with --log-uri, you will find the Application Master's logs under s3://aws-logs-813591802533-us-west-2/elasticmapreduce/&lt;CLUSTER ID&gt;/containers/&lt;YARN application ID&gt;/, where the YARN application ID is (based on the log you included above) application_1487760984275_0001, and the container ID should be something like container_1487760984275_0001_01_000001. (The first container of the application is the Application Master.)


0 votes

What you have is a URL of an object store, which is reachable through the Hadoop filesystem APIs, and a stack trace from java.io.File, which cannot read it because it does not refer to anything on the local disk.

Use SparkContext.hadoopRDD() as the operation to convert the path into an RDD.
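A minimal sketch of that idea, assuming a live SparkContext named `sc` inside the job (the `toS3a` helper and the exact bucket path are illustrative, not from the original post): since `scala.io.Source.fromFile` only reads local files, read the object through Spark instead, e.g. with `sc.textFile`, which goes through the Hadoop filesystem layer and understands `s3a://` URLs on EMR.

```scala
// Sketch only: read the config through Spark's Hadoop filesystem layer
// instead of java.io. `sc` (a SparkContext) is assumed to exist in the job.

// Tiny helper to normalize the scheme to s3a://, which the Hadoop S3A
// connector handles:
def toS3a(path: String): String =
  if (path.startsWith("s3://")) "s3a://" + path.stripPrefix("s3://")
  else path

// Inside the Spark job, replace the fromFile loop with something like:
// val lines = sc.textFile(toS3a("s3://tracceale/params/configS3.txt")).collect()
// for (line <- lines) { /* parse each config line as before */ }
```

Note that `collect()` pulls the file's lines back to the driver, which is fine for a small config file but not for large datasets.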


0 votes

There is a possibility that the file is missing at that location; you might be able to see it after you ssh into the EMR cluster, but the step command itself still cannot find it and starts throwing that file-not-found exception.

What I did in that case:

Step 1: Checked for the file existence in the project directory which we copied to EMR.

for example mine was in `//usr/local/project_folder/`

Step 2: Copy the script which you're expecting to run on the EMR.

for example I copied from `//usr/local/project_folder/script_name.sh` to `/home/hadoop/`

Step 3: Then executed the script from /home/hadoop/ by passing the absolute path to the command-runner.jar

command-runner.jar bash /home/hadoop/script_name.sh

That way I found my script was running. Hope this helps someone.
