Py4JJavaError when calling save: java.lang.NoSuchMethodError on org.apache.spark.sql.AnalysisException in org.apache.spark.sql.kafka010.KafkaWriter

Problem description

I can't write from Spark to Kafka. Spark reads fine but does not write; if I write to the console instead, no error is raised.
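The script itself is only available as a screenshot (linked in the answer below), but the traceback that follows shows the failure inside DataFrameWriter.save() going through KafkaSourceProvider, i.e. a batch write to Kafka. A minimal sketch of that kind of write is shown here for context; the broker address, topic name, and column names are assumptions, not the original code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_json, struct

spark = SparkSession.builder.appName("kafka-write-sketch").getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "payload"])

# Kafka expects a string or binary "value" column (and optionally "key").
(df.select(to_json(struct("id", "payload")).alias("value"))
   .write
   .format("kafka")
   .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker address
   .option("topic", "demo-topic")                        # assumed topic name
   .save())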

Traceback (most recent call last):
File "f:\Sistema de Informação\TCC\Projeto\Novo python\producer\spark copy 3.py", line 55, in 
<module>
.save()

File "C:\Users\ingri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pyspark\sql\readwriter.py", line 1461, in save
self._jwrite.save()

File "C:\Users\ingri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\py4j\java_gateway.py", line 1322, in __call__    return_value = get_return_value(

File "C:\Users\ingri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pyspark\errors\exceptions\captured.py", line 
179, in deco
return f(a, kw)

File "C:\Users\ingri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\py4j\protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o63.save.
: java.lang.NoSuchMethodError: 'scala.Option org.apache.spark.sql.AnalysisException$.$lessinit$greater$default$6()'
at org.apache.spark.sql.kafka010.KafkaWriter$.validateQuery(KafkaWriter.scala:59)
at org.apache.spark.sql.kafka010.KafkaWriter$.write(KafkaWriter.scala:69)
at org.apache.spark.sql.kafka010.KafkaSourceProvider.createRelation(KafkaSourceProvider.scala:183)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logicaa
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)       
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:859)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:388)       
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:361)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:248)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:75)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:52)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:1583)
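A java.lang.NoSuchMethodError on one of Spark's own classes (here the AnalysisException constructor looked up by KafkaWriter) almost always means the spark-sql-kafka-0-10 connector jar was built against a different Spark or Scala version than the installed runtime, so the first thing to verify is that the connector coordinates match the local installation. A minimal check (note the Scala-version lookup goes through the py4j gateway, an internal API):

import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# All three values must agree with the connector coordinates,
# e.g. org.apache.spark:spark-sql-kafka-0-10_<scala>:<spark>
print(pyspark.__version__)         # PySpark package version
print(spark.sparkContext.version)  # Spark runtime version
print(spark.sparkContext._jvm.scala.util.Properties.versionNumberString())  # Scala version (via py4j internals)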

PS F:\Sistema de Informação\TCC\Projeto\Novo python>
24/04/04 16:28:05 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\ingri\AppData\Local\Temp\spark-726764a6-4e83-4ffd-87be-3834b755da58
java.io.IOException: Failed to delete: C:\Users\ingri\AppData\Local\Temp\spark-726764a6-4e83-4ffd-87be-3834b755da58\userFiles-211c5d3c-a155-4803-acd0-a7ea0cc25ad0\org.tukaani_xz-1.9.jar
at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:14
at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:117)
at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:1
at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:117)
at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:129)
at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:117)
at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:90)
at org.apache.spark.util.SparkFileUtils.deleteRecursively(SparkFileUtils.scala:121)
at org.apache.spark.util.SparkFileUtils.deleteRecursively$(SparkFileUtils.scala:120)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1126)
at org.apache.spark.util.ShutdownHookManager$.$anonfun$new$4(ShutdownHookManager.scala:65)
at org.apache.spark.util.ShutdownHookManager$.$anonfun$new$4$adapted(ShutdownHookManager.scala:62)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at org.apache.spark.util.ShutdownHookManager$.$anonfun$new$2(ShutdownHookManager.scala:62)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1928)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)

SUCCESS: the process with PID 14860 (child of PID 568) has been terminated.
SUCCESS: the process with PID 568 (child of PID 15708) has been terminated.
SUCCESS: the process with PID 15708 (child of PID 18308) has been terminated.
apache-spark pyspark apache-kafka spark-kafka-integration
1 Answer
0 votes

Following is the code I am running [1]: https://i.stack.imgur.com/d2Pk3.png
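Since the code is only linked as an image, here is a sketch of pinning the connector to the installed PySpark version when the session is created; the 2.12 Scala suffix and the 3.5.1 version are assumptions and must be replaced with whatever pyspark --version reports on the machine:

from pyspark.sql import SparkSession

# The Scala suffix (2.12) and version (3.5.1) below are assumptions:
# they must match the local PySpark installation exactly, or the
# NoSuchMethodError above will reappear.
spark = (SparkSession.builder
         .appName("kafka-producer")
         .config("spark.jars.packages",
                 "org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1")
         .getOrCreate())

The same coordinates can also be passed on the command line with spark-submit --packages; what matters is that a single connector version, matching the runtime, ends up on the classpath.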
