How to create multiple partitions with Alpakka


I am trying to create a simple producer that creates a topic with a number of partitions supplied by the configuration.

According to the Alpakka Producer Settings documentation, any property from org.apache.kafka.clients.producer.ProducerConfig can be set in the kafka-clients section. And there is a num.partitions property mentioned in the Producer API Doc.

So I added that property to my application.conf file, as shown below:

topic = "topic"
topic = ${?TOPIC}

# Properties for akka.kafka.ProducerSettings can be
# defined in this section or a configuration section with
# the same layout.
akka.kafka.producer {
  # Tuning parameter of how many sends that can run in parallel.
  parallelism = 100
  parallelism = ${?PARALLELISM}

  # Duration to wait for `KafkaConsumer.close` to finish.
  close-timeout = 20s

  # Fully qualified config path which holds the dispatcher configuration
  # to be used by the producer stages. Some blocking may occur.
  # When this value is empty, the dispatcher configured for the stream
  # will be used.
  use-dispatcher = "akka.kafka.default-dispatcher"

  # The time interval to commit a transaction when using the `Transactional.sink` or `Transactional.flow`
  eos-commit-interval = 100ms

  # Properties defined by org.apache.kafka.clients.producer.ProducerConfig
  # can be defined in this configuration section.
  kafka-clients {
    bootstrap.servers = "my-kafka:9092"
    bootstrap.servers = ${?BOOTSTRAPSERVERS}
    num.partitions = "3"
    num.partitions = ${?NUM_PARTITIONS}
  }
}

The producer application code is as follows:

import akka.Done
import akka.actor.ActorSystem
import akka.kafka.ProducerSettings
import akka.kafka.scaladsl.Producer
import akka.stream.scaladsl.Source
import akka.stream.{ActorMaterializer, Materializer}
import com.typesafe.config.ConfigFactory
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

import scala.concurrent.{ExecutionContextExecutor, Future}
import scala.util.{Failure, Success}

object Main extends App {

  val config = ConfigFactory.load()

  implicit val system: ActorSystem = ActorSystem("producer")
  implicit val materializer: Materializer = ActorMaterializer()

  val producerConfigs = config.getConfig("akka.kafka.producer")
  val producerSettings = ProducerSettings(producerConfigs, new StringSerializer, new StringSerializer)

  val topic = config.getString("topic")

  val done: Future[Done] =
    Source(1 to 100000)
      .map(_.toString)
      .map(value => new ProducerRecord[String, String](topic, value))
      .runWith(Producer.plainSink(producerSettings))

  implicit val ec: ExecutionContextExecutor = system.dispatcher
  done onComplete  {
    case Success(_) => println("Done"); system.terminate()
    case Failure(err) => println(err.toString); system.terminate()
  }

}

However, this does not work. The producer creates the topic with a single partition instead of the 3 partitions I set in the configuration:

num.partitions = "3"

Finally, the kafkacat output looks like this:

~$ kafkacat -b my-kafka:9092 -L
Metadata for all topics (from broker -1: my-kafka:9092/bootstrap):
 3 brokers:
  broker 2 at my-kafka-2.my-kafka-headless.default:9092
  broker 1 at my-kafka-1.my-kafka-headless.default:9092
  broker 0 at my-kafka-0.my-kafka-headless.default:9092
 1 topics:
  topic "topic" with 1 partitions:
    partition 0, leader 2, replicas: 2, isrs: 2

What is wrong? Is it possible to set properties of the Kafka Producer API in the kafka-clients section when using Alpakka?

apache-kafka kafka-producer-api alpakka
2 Answers

Answer 1 (2 votes)

# Properties defined by org.apache.kafka.clients.producer.ProducerConfig
# can be defined in this configuration section.

As this says, ProducerConfig is for producer settings, not broker settings, which is what num.partitions is (I think you got lost about which table the property is shown in within the Apache Kafka documentation... scroll to the top of that table to see the correct heading).

You cannot set a topic's partition count from the producer... you need to use the AdminClient class to create the topic, and the number of partitions is a parameter there, not a configuration property.

Example code:

import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}
import org.apache.kafka.common.config.TopicConfig

import scala.collection.JavaConverters._

val props = new Properties()
props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

val adminClient = AdminClient.create(props)

val numPartitions = 3
val replicationFactor = 3.toShort
val newTopic = new NewTopic("new-topic-name", numPartitions, replicationFactor)
val configs = Map(TopicConfig.COMPRESSION_TYPE_CONFIG -> "gzip")
// set some topic-level configs
newTopic.configs(configs.asJava)

adminClient.createTopics(List(newTopic).asJavaCollection)

Then you can start the producer.
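
Putting the two steps together, here is a minimal sketch (not part of the original answer) of creating the topic before running the Alpakka stream from the question. The topic name "topic", the bootstrap servers "my-kafka:9092", and the partition/replication counts are assumptions carried over from the question's configuration:

import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

import scala.collection.JavaConverters._

// Assumption: same brokers and topic name as in the question's application.conf
val adminProps = new Properties()
adminProps.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka:9092")
val adminClient = AdminClient.create(adminProps)

// createTopics is asynchronous; all().get() blocks until the topic exists
// (it fails with a TopicExistsException if the topic was already created)
adminClient.createTopics(List(new NewTopic("topic", 3, 3.toShort)).asJavaCollection).all().get()
adminClient.close()

// Only now run the producer stream from the question, e.g.
// Source(1 to 100000).map(_.toString)
//   .map(value => new ProducerRecord[String, String]("topic", value))
//   .runWith(Producer.plainSink(producerSettings))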


Answer 2 (1 vote)

It seems the topic is being auto-created, which is Kafka's default behavior. If that is the case, you need to define the default number of partitions in the broker's server.properties file:

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3
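
If the topic is being auto-created, a quick sketch of how the resulting partition count could be double-checked from code with the AdminClient (topic name and bootstrap servers are assumed from the question; the kafkacat -L output above gives the same information):

import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}

import scala.collection.JavaConverters._

// Assumption: brokers and topic name taken from the question
val props = new Properties()
props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka:9092")
val adminClient = AdminClient.create(props)

// describeTopics returns the broker's view of the topic, including its partition list
val description = adminClient.describeTopics(List("topic").asJava).all().get().get("topic")
println(s"'topic' has ${description.partitions().size()} partitions")
adminClient.close()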