How to use Jupyter + SparkR with a custom R installation


I'm working from a Dockerized image with Jupyter notebooks and a SparkR kernel. When I create a SparkR notebook, it uses the Microsoft R (3.3.2) installation rather than the vanilla CRAN R installation (3.2.3).

The Docker image I'm using installs some custom R libraries and Python packages, but I never explicitly installed Microsoft R. Whether I can remove Microsoft R or have to keep it side by side, how do I get my SparkR kernel to use my custom installation of R?

Thanks in advance.

r jupyter sparkr
2 Answers

1 vote

Docker-related issues aside, a Jupyter kernel's settings are configured in a file named kernel.json, which resides in a specific directory (one per kernel); these directories can be listed with the command jupyter kernelspec list. For example, this is the situation on my (Linux) machine:

$ jupyter kernelspec list
Available kernels:
  python2       /usr/lib/python2.7/site-packages/ipykernel/resources
  caffe         /usr/local/share/jupyter/kernels/caffe
  ir            /usr/local/share/jupyter/kernels/ir
  pyspark       /usr/local/share/jupyter/kernels/pyspark
  pyspark2      /usr/local/share/jupyter/kernels/pyspark2
  tensorflow    /usr/local/share/jupyter/kernels/tensorflow

Again, as an example, here are the contents of kernel.json for my R kernel (ir):

{
  "argv": ["/usr/lib64/R/bin/R", "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"],
  "display_name": "R 3.3.2",
  "language": "R"
}

And here is the corresponding file for my pyspark2 kernel:

{
 "display_name": "PySpark (Spark 2.0)",
 "language": "python",
 "argv": [
  "/opt/intel/intelpython27/bin/python2",
  "-m",
  "ipykernel",
  "-f",
  "{connection_file}"
 ],
 "env": {
  "SPARK_HOME": "/home/ctsats/spark-2.0.0-bin-hadoop2.6",
  "PYTHONPATH": "/home/ctsats/spark-2.0.0-bin-hadoop2.6/python:/home/ctsats/spark-2.0.0-bin-hadoop2.6/python/lib/py4j-0.10.1-src.zip",
  "PYTHONSTARTUP": "/home/ctsats/spark-2.0.0-bin-hadoop2.6/python/pyspark/shell.py",
  "PYSPARK_PYTHON": "/opt/intel/intelpython27/bin/python2"
 }
}

As you can see, in both cases the first element of argv is the executable for the respective language - GNU R in the case of my ir kernel, and Intel Python 2.7 in the case of my pyspark2 kernel. Changing it so that it points to your GNU R executable should resolve your issue.
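As a minimal sketch, adapting the ir example above for a SparkR-style kernel might look like the following; the R path (/usr/lib/R/bin/R) and the SPARK_HOME value are assumptions, so replace them with the CRAN R executable and Spark distribution inside your own image:

{
  "argv": ["/usr/lib/R/bin/R", "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"],
  "display_name": "R 3.2.3 (CRAN)",
  "language": "R",
  "env": {
    "SPARK_HOME": "/opt/spark"
  }
}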


0 votes

To use a custom R environment, I believe you need to set the following application properties when launching Spark:

    "spark.r.command": "/custom/path/bin/R",
    "spark.r.driver.command": "/custom/path/bin/Rscript",
    "spark.r.shell.command" : "/custom/path/bin/R"

These are documented more fully here: https://spark.apache.org/docs/latest/configuration.html#sparkr
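As an illustrative sketch (the /custom/path prefix is just a placeholder for your CRAN R installation), these properties can be supplied through sparkConfig when the SparkR session is created from R, for example inside the notebook:

library(SparkR)

# Point Spark's R worker, driver and shell processes at the custom R installation.
# The paths are placeholders; adjust them to your own environment.
sparkR.session(
  sparkConfig = list(
    spark.r.command        = "/custom/path/bin/R",
    spark.r.driver.command = "/custom/path/bin/Rscript",
    spark.r.shell.command  = "/custom/path/bin/R"
  )
)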
