Azure HDInsight Jupyter and PySpark not working properly

Question · 0 votes · 1 answer

I created an HDInsight cluster on Azure with the following parameters:

Spark 2.4 (HDI 4.0)

Then I tried the HDInsight tutorial for Apache Spark using a PySpark Jupyter Notebook, and it worked fine. But ever since, whether I rerun the notebook a second time or start a new notebook and run something as simple as

from pyspark.sql import *

or any other command, it always ends with

The code failed because of a fatal error:
    Session 7 did not start up in 180 seconds..

Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context. For instructions on how to assign resources see http://go.microsoft.com/fwlink/?LinkId=717038
b) Contact your cluster administrator to make sure the Spark magics library is configured correctly.

After that, I also tried pyspark over SSH. When I connect to the cluster via ssh and run

$ pyspark

the following is shown:

SPARK_MAJOR_VERSION is set to 2, using Spark2
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

and it hangs there.

I wonder whether I missed a step? Or is this a bug or something else? How can I fix this?


azure apache-spark pyspark jupyter-notebook hdinsight
1 Answer

0 votes

This looks like a resource problem for the Spark application. Check the resources available on the cluster and shut down any applications you don't need.

Go to the Yarn UI.
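Alternatively, the same check can be done from an SSH session on the cluster's head node. A minimal sketch using the standard YARN CLI (the application ID below is a placeholder, not from the question):

```shell
# List running YARN applications to see what is holding cluster resources
yarn application -list -appStates RUNNING

# Kill an application you no longer need so Livy/Jupyter can get a container
# (replace the ID with one from the listing above)
yarn application -kill application_1234567890123_0001
```

Stale Livy sessions left behind by earlier notebook runs are a common culprit; once they are killed, restarting the Jupyter kernel should let a new Spark session start within the timeout.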
