How to write a stream in append mode

Problem description

I am getting the error "Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark". I want to put the output on the console.

from pyspark.sql import SparkSession
from pyspark.sql.functions import split, window

class StructSpark:
    def __init__(self, address, port):
        self.address = address
        self.port = port
        self.spark = SparkSession.builder.appName("StructuredWordcount").getOrCreate()

    def getonline(self):
        # Read lines from a socket source, tagging each line with a timestamp.
        lines = self.spark.readStream.format('socket').option('host', self.address).option('port', self.port).option('includeTimestamp', 'true').load()
        words = lines.select(split(lines.value, ',').alias("value"), lines.timestamp)
        words1 = words.select(split(words.value[0], ',').alias("key"), split(words.value[0], ',').alias("value"), words.timestamp)
        # Windowed count over 5-minute windows with a 10-minute watermark.
        windowedCount = words1.withWatermark("timestamp", "10 minutes").groupBy(window(words1.timestamp, "5 minutes", "5 minutes"), words1.key).count()
        windowedCount.createOrReplaceTempView("updates")
        count = self.spark.sql("select * from updates where count > 1")
        # Note: this writes the DataFrame's repr (its schema), not the streamed rows.
        with open('/home/vaibhav/Desktop/data.txt', 'a') as file:
            file.write(str(count))
        query = count.writeStream.outputMode("append").format("console").start()
        query.awaitTermination()
apache-spark streaming
1 Answer

Since you are performing an aggregation operation on the stream, you cannot run writeStream in append mode. Use it in 'complete' mode, or perform the writeStream before the aggregation operation, as sketched below.
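
A minimal sketch of both suggestions, reusing the count and words1 DataFrames from the question's code (those variable names are carried over from there, not part of any library API):

# Suggestion 1: keep the aggregation, but switch to "complete" output mode,
# which re-emits the full aggregated result table on every trigger.
query = count.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()

# Suggestion 2 (alternative): write in "append" mode before any aggregation,
# e.g. on the parsed-but-ungrouped words1 stream.
query = words1.writeStream.outputMode("append").format("console").start()
query.awaitTermination()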
