How to connect Spark Streaming to Kafka

Problem description (votes: 0, answers: 1)
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils    

sc = SparkContext.getOrCreate()
ssc = StreamingContext(sc, 1)
directKafkaStream = KafkaUtils.createDirectStream(ssc, ["topic"], {"metadata.broker.list":"prd-kafka:9092,prd-kafka1:9092,prd-kafka:9092,"})

I am trying to connect Spark Streaming to Kafka so I can read a few topics and write them to HDFS, but I am running into the problem shown in the traceback below:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/cloudera/parcels/CDH-5.9.3-1.cdh5.9.3.p0.4/lib/spark/python/pyspark/streaming/kafka.py", line 152, in createDirectStream
    raise e
py4j.protocol.Py4JJavaError: An error occurred while calling o73.createDirectStreamWithoutMessageHandler.
: org.apache.spark.SparkException: java.io.EOFException
java.nio.channels.ClosedChannelException
java.io.EOFException
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
        at scala.util.Either.fold(Either.scala:97)
        at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
        at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
        at org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper.createDirectStream(KafkaUtils.scala:720)
        at org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper.createDirectStreamWithoutMessageHandler(KafkaUtils.scala:688)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:748)
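
For reference, once the direct stream is created, writing it out to HDFS with this DStream API would look roughly like the following sketch (the output path is a placeholder):

# Keep only the message values and dump each micro-batch to HDFS.
lines = directKafkaStream.map(lambda kv: kv[1])
lines.saveAsTextFiles("hdfs:///user/spark/topic_output")

ssc.start()
ssc.awaitTermination()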

I think my connection needs to look something like this:

format("kafka") \
    .option("kafka.sasl.mechanism", "SCRAM-SHA-256") \
    .option("kafka.security.protocol", "SASL_PLAINTEXT") \
    .option("kafka.sasl.jaas.config", EH_SASL) \
    .option("kafka.batch.size", 5000) \
    .option("kafka.bootstrap.servers", "metadata.broker.list":"prd-kafka:9092,prd-kafka1:9092,prdkafka:9092,") \
    .option("subscribe", "topic") 

Does anyone know how to connect Spark Streaming to Kafka using the SCRAM-SHA-256 mechanism?


apache-spark spark-streaming pyspark-sql

1 Answer

0 votes

Use Spark Structured Streaming. It is much easier.
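
A minimal sketch of that approach, using the Structured Streaming Kafka source (spark-sql-kafka-0-10) with SCRAM-SHA-256 and an HDFS file sink; broker addresses, credentials, and paths are placeholders:

from pyspark.sql import SparkSession

# Placeholder JAAS string for SCRAM-SHA-256; substitute real credentials.
EH_SASL = (
    'org.apache.kafka.common.security.scram.ScramLoginModule required '
    'username="my-user" password="my-password";'
)

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

# Read from Kafka over SASL_PLAINTEXT with SCRAM-SHA-256.
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "prd-kafka:9092,prd-kafka1:9092") \
    .option("kafka.security.protocol", "SASL_PLAINTEXT") \
    .option("kafka.sasl.mechanism", "SCRAM-SHA-256") \
    .option("kafka.sasl.jaas.config", EH_SASL) \
    .option("subscribe", "topic") \
    .option("startingOffsets", "latest") \
    .load()

# Kafka exposes key/value as binary; cast to strings before writing.
messages = df.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

# File sinks require a checkpoint location; both paths are placeholders.
query = messages.writeStream \
    .format("parquet") \
    .option("path", "hdfs:///user/spark/kafka_topic") \
    .option("checkpointLocation", "hdfs:///user/spark/kafka_topic_checkpoint") \
    .outputMode("append") \
    .start()

query.awaitTermination()

The job also needs the Kafka connector on the classpath, for example via spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:&lt;version&gt;, choosing the artifact that matches your Spark and Scala versions.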