Problem with the spark-shell command: java.io.IOException: Could not create FileClient

Problem description

I am running into a problem when executing the spark-shell command:

[mapr@node1 ~]$ /opt/mapr/spark/spark-2.1.0/bin/spark-shell --master local[2]

The error:

20/05/02 14:21:34 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:643)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:696)
    at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1405)
    at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1080)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:93)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:531)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
    at $line3.$read$$iw$$iw.<init>(<console>:15)
    at $line3.$read$$iw.<init>(<console>:42)
    at $line3.$read.<init>(<console>:44)
    at $line3.$read$.<init>(<console>:48)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.$print$lzycompute(<console>:7)
    at $line3.$eval$.$print(<console>:6)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:68)
    at org.apache.spark.repl.Main$.main(Main.scala:51)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:733)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:202)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:116)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:136)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:637)
    ... 58 more
java.io.IOException: Could not create FileClient
  at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:643)
  at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:696)
  at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1405)
  at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1080)
  at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:93)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:531)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
  ... 47 elided
Caused by: java.io.IOException: Could not create FileClient
  at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:136)
  at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:637)
  ... 58 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0-mapr-1710
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 2020-05-02 14:21:27,8616 ERROR Cidcache fs/client/fileclient/cc/cidcache.cc:2470 Thread: 30539 MoveToNextCldb: No CLDB entries, cannot run, sleeping 5 seconds!
2020-05-02 14:21:32,9268 ERROR Client fs/client/fileclient/cc/client.cc:1329 Thread: 30539 Failed to initialize client for cluster MyCluster, error Connection reset by peer(104)
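
For what it's worth, the two MapR client errors above say the client has no CLDB entries and cannot initialize a connection to MyCluster. Assuming the default MapR paths and ports, this can be cross-checked on the node like so:

# Show the cluster definition the MapR client reads (cluster name + CLDB hosts)
cat /opt/mapr/conf/mapr-clusters.conf
# Check that the default CLDB port (7222) is reachable from this node
nc -zv node1 7222
# If the cluster is up, ask maprcli which node is the CLDB master
maprcli node cldbmaster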

I am using Spark 2.1.0 with a 3-node MapR cluster.

I also have the following conf file with the list of slave nodes:

# A Spark Worker will be started on each of the machines listed below.
node2
node3
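
For reference, this file is only read by the standalone launch scripts, so the workers are started the usual way (paths match the same MapR Spark install):

# Start the master on node1, then one worker per host listed in conf/slaves
/opt/mapr/spark/spark-2.1.0/sbin/start-master.sh
/opt/mapr/spark/spark-2.1.0/sbin/start-slaves.sh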

I also added the following lines to $SPARK_HOME/conf/spark-env.sh:

export SPARK_MASTER_HOST=node1
export SPARK_MASTER_IP=172.17.0.2
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
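
Since the last line only works if the hadoop command resolves, I can sanity-check it like this (an empty result would leave SPARK_DIST_CLASSPATH empty):

# Verify that hadoop is on PATH and that the classpath actually expands
which hadoop
hadoop classpath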

Has anyone run into the same problem, or does anyone know how to fix it?

apache-spark centos7 mapr
1 Answer

This looks like a typical installation problem. Have a look at the spark-on-yarn-errors thread on the MapR forums; they suggest the install commands in the link as well as running configure.sh.
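
If the "No CLDB entries" error in your log is the root cause, re-running configure.sh with explicit CLDB and ZooKeeper hosts is the usual first step. A sketch only, assuming node1 hosts both services on the default MapR ports:

# Point this node's MapR client at the cluster (hostnames/ports are assumptions)
sudo /opt/mapr/server/configure.sh -N MyCluster -C node1:7222 -Z node1:5181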

The gist of it (source: the MapR forums) is the following solution:


pip uninstall toree

pip install --pre toree

jupyter toree install --interpreters=Scala,PySpark,SparkR,SQL --spark_home=$SPARK_HOME
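
Afterwards you can verify that the kernels were registered (assuming jupyter is on your PATH):

# List the kernelspecs Jupyter knows about; the Toree ones should appear here
jupyter kernelspec list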

Also, the Spark documentation says the Scala version compatible with Spark 2.1.0 is 2.11.x:

Spark runs on Java 7+, Python 2.6+/3.4+ and R 3.1+. For the Scala API, Spark 2.1.0 uses Scala 2.11. You will need to use a compatible Scala version (2.11.x).
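
An easy way to confirm what your MapR build actually ships with (paths assume the default layout):

# Prints the Spark version banner, including "Using Scala version ..."
/opt/mapr/spark/spark-2.1.0/bin/spark-shell --version
# The bundled Scala library jar carries the exact version in its name
ls /opt/mapr/spark/spark-2.1.0/jars/scala-library-*.jar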

Hopefully you don't have a version mismatch there.
