External dependencies for a Spark job

Question · votes: 0 · answers: 2

I am new to big data technologies. I have to run a Spark job in cluster mode on EMR. The job is written in Python and depends on several libraries and some other tools. I have written the script and run it locally in client mode, but when I try to run it with YARN it fails with dependency errors. How do I manage these dependencies?

Log:

"/mnt/yarn/usercache/hadoop/appcache/application_1511680510570_0144/container_1511680510570_0144_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 711, in subimport
    __import__(name)
ImportError: ('No module named boto3', <function subimport at 0x7f8c3c4f9c80>, ('boto3',))

        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Tags: pyspark, yarn, emr
2 Answers

1 vote

It looks like you haven't installed the boto3 library. Download a compatible version and install it with:

$ pip install boto3

or python -m pip install --user boto3

Hope this helps. For reference, see https://github.com/boto/boto3

More precisely, it seems you have not installed boto3 on all the executors (nodes). When you run Spark, parts of your Python code run on the driver and parts on the executors. With YARN, you need to install the library on every node.

To install it there, refer to: How to bootstrap installation of Python modules on Amazon EMR?
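A minimal bootstrap script along those lines might look like the following. The script name, and boto3 as the only dependency, are assumptions; list whatever packages your job actually imports:

```shell
#!/bin/bash
# install_deps.sh -- hypothetical EMR bootstrap script.
# Upload this file to S3 and reference it via --bootstrap-actions so it
# runs on every node (master, core, and task) as the cluster comes up.
set -euxo pipefail

# Install the Python dependencies the Spark job imports on the executors.
sudo python -m pip install boto3
```

Because bootstrap actions run on every node before applications start, the library ends up available to both the driver and all executors.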


0 votes

Yes, you can:

aws emr create-cluster --bootstrap-actions Path=<>,Name=BootstrapAction1,Args=[arg1,arg2].. --auto-terminate

For details, see http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-bootstrap.html#bootstrapUses
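Filled in with a hypothetical script location, such a call could look like this. The bucket name, script name, cluster name, release label, and instance settings below are all placeholders, not values from the question:

```shell
# Hypothetical example: s3://my-bucket/install_deps.sh is assumed to be a
# bootstrap script that pip-installs the job's Python dependencies.
aws emr create-cluster \
    --name "spark-deps-demo" \
    --release-label emr-5.20.0 \
    --applications Name=Spark \
    --instance-type m4.large \
    --instance-count 3 \
    --use-default-roles \
    --bootstrap-actions Path=s3://my-bucket/install_deps.sh,Name=InstallPythonDeps \
    --auto-terminate
```

The bootstrap action runs on every node before Spark starts, so the executors see the same libraries as the driver.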

© www.soinside.com 2019 - 2024. All rights reserved.