Kafka producer hangs on send

Problem description

The logic is that a streaming job, which gets its data from a custom source, has to write to both Kafka and HDFS.

I wrote a (very) basic Kafka producer to do this, but the whole streaming job hangs on the send method.

import java.util.Properties

import org.apache.kafka.clients.producer.{ProducerConfig, ProducerRecord}

class KafkaProducer(val kafkaBootstrapServers: String, val kafkaTopic: String, val sslCertificatePath: String, val sslCertificatePassword: String) {

  val kafkaProps: Properties = new Properties()
  kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers)
  kafkaProps.put(ProducerConfig.ACKS_CONFIG, "1")
  // The record key below is a Long, so the key serializer must be a LongSerializer;
  // note that Array[String] has no built-in serializer, so the StringSerializer only
  // works if the value is converted to a String before sending.
  kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.LongSerializer")
  kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProps.put("ssl.truststore.location", sslCertificatePath)
  kafkaProps.put("ssl.truststore.password", sslCertificatePassword)

  // Fully qualified to avoid a name clash with this wrapper class
  val kafkaProducer: org.apache.kafka.clients.producer.KafkaProducer[Long, Array[String]] =
    new org.apache.kafka.clients.producer.KafkaProducer[Long, Array[String]](kafkaProps)

  def sendKafkaMessage(message: Message): Unit = {
    message.data.foreach(list => {
      val producerRecord: ProducerRecord[Long, Array[String]] =
        new ProducerRecord[Long, Array[String]](kafkaTopic, message.timeStamp.getTime, list.toArray)
      // Asynchronous send; the returned Future is never inspected, so errors stay invisible
      kafkaProducer.send(producerRecord)
    })
  }
}
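
send() is asynchronous and returns a java.util.concurrent.Future[RecordMetadata]; since the code above never inspects that future, any broker-side problem stays invisible. Below is a minimal diagnostic sketch (the helper name sendAndWait is made up, not part of the question's code) that blocks on the future with a timeout and reports errors through a callback:

import java.util.concurrent.TimeUnit

import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}

object SendDiagnostics {
  // Blocks on the Future returned by send() so a stuck or failing send surfaces
  // as a TimeoutException / ExecutionException instead of a silent hang.
  def sendAndWait(producer: KafkaProducer[Long, Array[String]],
                  record: ProducerRecord[Long, Array[String]]): RecordMetadata = {
    val future = producer.send(record, new Callback {
      override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
        if (exception != null) exception.printStackTrace()
    })
    future.get(30, TimeUnit.SECONDS)
  }
}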

And the code that calls the producer:

receiverStream.foreachRDD(rdd => {
      val messageRowRDD: RDD[Row] = rdd.mapPartitions(partition => {
        val parser: Parser = new Parser
        // One producer per partition, created on the executor
        val kafkaProducer: KafkaProducer = new KafkaProducer(kafkaBootstrapServers, kafkaTopic, kafkaSslCertificatePath, kafkaSslCertificatePass)
        val newPartition = partition.map(message => {
          Logger.getLogger("importer").error("Writing Message to Kafka...")
          kafkaProducer.sendKafkaMessage(message)
          Logger.getLogger("importer").error("Finished writing Message to Kafka")
          // Parse each entry of the message into a Row for the HDFS side
          message.data.map(singleMessage => parser.parseMessage(message.timeStamp.getTime, singleMessage))
        })
        newPartition.flatten
      })

      val df = sqlContext.createDataFrame(messageRowRDD, Schema.messageSchema)

      Logger.getLogger("importer").info("Entries-count: " + df.count())
      val row = Try(df.first)

      row match {
        case Success(_) => Persister.writeDataframeToDisk(df, outputFolder)
        case Failure(_) => Logger.getLogger("importer").warn("Resulting DataFrame is empty. Nothing can be written")
      }
    })
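
As an aside (not necessarily the cause of the hang): the per-partition producer above is never flushed or closed, and the iterator returned from mapPartitions is lazy, so the sends only happen once the downstream DataFrame is evaluated. A sketch of a variant, reusing the question's own names, that forces the sends and closes the producer before handing the partition back:

rdd.mapPartitions(partition => {
  val parser = new Parser
  val kafkaProducer = new KafkaProducer(kafkaBootstrapServers, kafkaTopic, kafkaSslCertificatePath, kafkaSslCertificatePass)

  // Materialize the partition so every send happens now, not lazily later
  val rows = partition.flatMap(message => {
    kafkaProducer.sendKafkaMessage(message)
    message.data.map(singleMessage => parser.parseMessage(message.timeStamp.getTime, singleMessage))
  }).toList

  // Flush buffered records and release the producer's resources for this partition
  kafkaProducer.kafkaProducer.flush()
  kafkaProducer.kafkaProducer.close()

  rows.iterator
})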

From the logs I can see that every executor reaches the "Writing Message to Kafka..." point, but never gets any further. All executors hang and no exception is thrown.

The Message class is a very simple case class with 2 fields: a timestamp and an array of strings.
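
For reference, a Message case class consistent with how it is used above would look roughly like this (the exact field types are an assumption; the nested sequence matches the message.data iteration in sendKafkaMessage):

import java.sql.Timestamp

// Sketch of the Message case class described in the question: a timestamp plus
// the string data; each inner sequence becomes one ProducerRecord value above.
case class Message(timeStamp: Timestamp, data: Seq[Seq[String]])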

scala hadoop apache-spark apache-kafka kafka-producer-api
1 Answer

This was due to the acks field of the Kafka producer.

Acks was set to 1 and sending became much faster.
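
For context, acks controls how many broker acknowledgements the producer waits for before a send is considered complete; a short sketch of the standard values:

import java.util.Properties

import org.apache.kafka.clients.producer.ProducerConfig

val props = new Properties()
// "0"   -> fire-and-forget: no acknowledgement is awaited
// "1"   -> the partition leader alone must acknowledge (the setting used here)
// "all" -> all in-sync replicas must acknowledge (safest, but slowest)
props.put(ProducerConfig.ACKS_CONFIG, "1")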
