Spark exception: There is no Credential Scope


I'm new to Databricks and am trying to connect to RStudio Server from my all-purpose compute cluster.

Here is the cluster configuration:

Policy: Personal Compute

Access mode: Single user

Databricks Runtime version: 13.2 ML (includes Apache Spark 3.4.0, Scala 2.12)

Unity Catalog is also configured in our workspace.

Following the instructions here, I tried running code with both sparklyr and SparkR.

sparklyr

> library(sparklyr)
> sc <- spark_connect(method = "databricks")
However, I get the following error:

Error in value[[3L]](cond) : Failed to start sparklyr backend: java.util.concurrent.ExecutionException: org.apache.spark.SparkException: There is no Credential Scope.
    at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
    at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
    at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
    at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2344)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
    at com.google.common.cache.Loc...
In addition: Warning messages:
1: In file.create(to[okay]) :
  cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.2-2.11.jar', reason 'Permission denied'
2: In file.create(to[okay]) :
  cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.1-2.11.jar', reason 'Permission denied'
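(Aside: the "Permission denied" warnings are a separate issue from the Credential Scope error; they mean sparklyr cannot copy its backend jars into the system site-library. A minimal sketch of installing sparklyr into a user-writable library instead; the R_LIBS_USER path is an assumption, so check .libPaths() on your cluster:)

    # Create a per-user library and install sparklyr there,
    # so its backend jars land somewhere we have write access to.
    user_lib <- Sys.getenv("R_LIBS_USER")   # conventional per-user library path
    dir.create(user_lib, recursive = TRUE, showWarnings = FALSE)
    install.packages("sparklyr", lib = user_lib)

    # Put the user library first on the search path before loading:
    .libPaths(c(user_lib, .libPaths()))
    library(sparklyr)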

SparkR

> library(SparkR)
> sparkR.session()
Java ref type org.apache.spark.sql.SparkSession id 1
> df <- SparkR::sql("SELECT * FROM default.diamonds LIMIT 2")
Error traceback:

Error in handleErrors(returnStatus, conn) :
  org.apache.spark.sql.AnalysisException: There is no Credential Scope. ; line 1 pos 14
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:69)
    at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:172)
    at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:94)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:219)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:219)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:372)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scal...
I'm not an R developer, so I can't really experiment with different configurations. I tried setting a personal access token, but that didn't work.
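For context, the conventional way to expose a personal access token to Databricks tooling from R is via environment variables; this is only a sketch with placeholder values, and note that when RStudio runs on the cluster driver itself, spark_connect(method = "databricks") attaches to the cluster's own Spark session and may not consult these at all:

    # Placeholder values -- substitute your workspace URL and PAT.
    Sys.setenv(
      DATABRICKS_HOST  = "https://<your-workspace>.cloud.databricks.com",
      DATABRICKS_TOKEN = "<personal-access-token>"
    )
    library(sparklyr)
    sc <- spark_connect(method = "databricks")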

Any help would be appreciated; thanks in advance :)

apache-spark databricks spark-ar-studio databricks-unity-catalog
1 Answer
You may need to switch the cluster's access mode from "Single user" to "No isolation shared" to get rid of the error.
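After changing the access mode and restarting the cluster, a quick smoke test from RStudio would look something like this (the default.diamonds table name is taken from your question):

    library(sparklyr)
    library(dplyr)

    sc <- spark_connect(method = "databricks")

    # Re-run the query that previously failed with "There is no Credential Scope".
    sdf <- sdf_sql(sc, "SELECT * FROM default.diamonds LIMIT 2")
    collect(sdf)   # pull the two rows back into a local data frame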
