在 Kubernetes 上使用 Bitnami Helm 安装 Kafka 集群


I am trying to install a Kafka cluster on Kubernetes using the Bitnami chart for Kafka (in my case I am using minikube, just for testing).

https://github.com/bitnami/charts/tree/main/bitnami/kafka

I started the installation by running:

helm install cluster-kafka bitnami/kafka -f values.yaml

My values.yaml file is as follows:

# values.yaml for Kafka on Kubernetes using Helm and the Bitnami chart in KRaft mode.

# Basic Kafka cluster configuration.
replicaCount: 3  # Number of Kafka broker replicas, for high availability.

# Docker image configuration.
image:
  registry: docker.io                 # Docker registry the image is pulled from.
  repository: bitnami/kafka           # Kafka image repository.
  tag: 3.6                            # Image tag to use; 'latest' for the newest version.

# Authentication configuration.
auth:
  clientProtocol: plaintext           # Client communication protocol, unencrypted.
  interBrokerProtocol: plaintext      # Inter-broker communication protocol, unencrypted.

# Kubernetes service configuration to expose Kafka internally.
service:
  type: ClusterIP                     # Kubernetes service type to expose Kafka inside the cluster.

# External access configuration for the Kafka brokers.
externalAccess:
  enabled: true                       # Enables external access.
  controller:
    service:
      type: NodePort
      nodePorts: [31090, 31091, 31092]  # NodePort for each broker.

# Liveness and readiness probes for the Kafka brokers.
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 2

readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 2

# Persistence configuration for storing Kafka data.
persistence:
  enabled: true                       # Enables persistence.
  storageClass: "standard"            # Storage class to use.
  accessModes:
    - ReadWriteOnce                   # Volume access mode.
  size: 2Gi                           # Storage volume size.

# Disable Zookeeper, since KRaft does not require it.
zookeeper:
  enabled: false

# KRaft mode configuration.
kraft:
  enabled: true
  clusterId: "vKaEBaltQuqktgAA3wkccA"  # KRaft cluster identifier.
  controllerQuorumVoters: "0@cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093,1@cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093,2@cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093"

# Listener configuration.
listeners:
  client:
    name: CLIENT
    containerPort: 9092
    protocol: PLAINTEXT
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: PLAINTEXT
  interbroker:
    name: INTERNAL
    containerPort: 9094
    protocol: PLAINTEXT
  external:
    name: EXTERNAL
    containerPort: 9095
    protocol: PLAINTEXT
  advertisedListeners:
    - CLIENT://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093
    - CLIENT://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093
    - CLIENT://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093
  # overrideListeners: "CLIENT://:9092,CONTROLLER://:9093,INTERNAL://:9094,EXTERNAL://:9095"
  securityProtocolMap: "CLIENT:PLAINTEXT,CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"

I created this file after going through the documentation for the Bitnami Kafka chart, so it very likely contains several errors. What I need is a Kafka cluster with at least 3 brokers using KRaft, but all the examples I have found use Zookeeper. Once the Helm install runs, I get the following output:

NAME: cluster-kafka
LAST DEPLOYED: Wed Feb 14 21:59:02 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 26.8.5
APP VERSION: 3.6.1
---------------------------------------------------------------------------------------------
 WARNING

    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.

    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    cluster-kafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092
    cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092
    cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run cluster-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.6 --namespace default --command -- sleep infinity
    kubectl exec --tty -i cluster-kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server cluster-kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
To connect to your Kafka controller+broker nodes from outside the cluster, follow these instructions:
    Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)

        1. Obtain the pod name:

        kubectl get pods --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=cluster-kafka,app.kubernetes.io/component=kafka"

        2. Obtain pod configuration:

        kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
    Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:

        echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=cluster-kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
WARNING: Rolling tag detected (bitnami/kafka:3.6), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/

If I run:

kubectl get pods

I get:

NAME                         READY   STATUS             RESTARTS        AGE
cluster-kafka-controller-0   0/1     CrashLoopBackOff   8 (4m12s ago)   21m
cluster-kafka-controller-1   0/1     CrashLoopBackOff   8 (4m20s ago)   21m
cluster-kafka-controller-2   0/1     CrashLoopBackOff   8 (4m21s ago)   21m

When I look at the logs of one of the pods, I see the following:

2024-02-15T01:10:35.689Z | kafka 01:10:35.68 INFO  ==> 
2024-02-15T01:10:35.690Z | kafka 01:10:35.69 INFO  ==> Welcome to the Bitnami kafka container
2024-02-15T01:10:35.691Z | kafka 01:10:35.69 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
2024-02-15T01:10:35.692Z | kafka 01:10:35.69 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
2024-02-15T01:10:35.693Z | kafka 01:10:35.69 INFO  ==> 
2024-02-15T01:10:35.694Z | kafka 01:10:35.69 INFO  ==> ** Starting Kafka setup **
2024-02-15T01:10:35.742Z | kafka 01:10:35.74 INFO  ==> Initializing KRaft storage metadata
2024-02-15T01:10:35.745Z | kafka 01:10:35.74 INFO  ==> Formatting storage directories to add metadata...
2024-02-15T01:10:37.269Z | Exception in thread "main" java.lang.IllegalArgumentException: Error creating broker listeners from '[CLIENT://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093 CLIENT://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093 CLIENT://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093]': No security protocol defined for listener [CLIENT
2024-02-15T01:10:37.269Z |  at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:266)
2024-02-15T01:10:37.269Z |  at kafka.server.KafkaConfig.effectiveAdvertisedListeners(KafkaConfig.scala:2154)
2024-02-15T01:10:37.269Z |  at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:2275)
2024-02-15T01:10:37.269Z |  at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:2233)
2024-02-15T01:10:37.269Z |  at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1603)
2024-02-15T01:10:37.269Z |  at kafka.tools.StorageTool$.$anonfun$main$1(StorageTool.scala:50)
2024-02-15T01:10:37.269Z |  at scala.Option.flatMap(Option.scala:271)
2024-02-15T01:10:37.270Z |  at kafka.tools.StorageTool$.main(StorageTool.scala:50)
2024-02-15T01:10:37.270Z |  at kafka.tools.StorageTool.main(StorageTool.scala)
2024-02-15T01:10:37.270Z | Caused by: java.lang.IllegalArgumentException: No security protocol defined for listener [CLIENT
2024-02-15T01:10:37.270Z |  at kafka.cluster.EndPoint$.$anonfun$createEndPoint$2(EndPoint.scala:49)
2024-02-15T01:10:37.270Z |  at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:450)
2024-02-15T01:10:37.270Z |  at kafka.cluster.EndPoint$.securityProtocol$1(EndPoint.scala:49)
2024-02-15T01:10:37.270Z |  at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:57)
2024-02-15T01:10:37.270Z |  at kafka.utils.CoreUtils$.$anonfun$listenerListToEndPoints$10(CoreUtils.scala:263)
2024-02-15T01:10:37.270Z |  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
2024-02-15T01:10:37.270Z |  at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
2024-02-15T01:10:37.270Z |  at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
2024-02-15T01:10:37.270Z |  at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
2024-02-15T01:10:37.270Z |  at scala.collection.TraversableLike.map(TraversableLike.scala:286)
2024-02-15T01:10:37.270Z |  at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
2024-02-15T01:10:37.270Z |  at scala.collection.AbstractTraversable.map(Traversable.scala:108)
2024-02-15T01:10:37.270Z |  at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:263)
2024-02-15T01:10:37.270Z |  ... 8 more

So I am getting the error:

No security protocol defined for listener

I have tried every configuration I can think of and checked the possible parameters to add to the values.yaml file, but I keep getting the same kind of error.

What could the problem be? Why is it happening? Are there other separate issues I need to fix?

For now I do not care about security; I only need PLAINTEXT, since this is being deployed to a local server for testing, and I want a working, fully functional cluster before I start dealing with security and ACLs.

I tried changing the listener configuration, using overrideListeners instead of client, controller, etc., as you can see in the file I uploaded (which is why it is commented out). What I need is some way to configure the listeners' security protocol so I can bring up the cluster and start testing connectivity between pods, or connect to topics from a server outside the Kubernetes cluster.

Thanks for your help.

kubernetes apache-kafka kubernetes-helm bitnami bitnami-kafka
1 Answer

0 votes

If you define your own advertised listeners, the value must be a string, not a list.

If it is left undefined, the chart builds it from the listener map.

Source - https://github.com/bitnami/charts/blob/main/bitnami/kafka/templates/_helpers.tpl#L517
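
Based on that, a minimal sketch of the listeners block (assuming chart version 26.x, as shown in the install output): either omit listeners.advertisedListeners entirely, or leave it as an empty string, so the chart derives the per-broker advertised listeners from the listener map itself instead of joining your list into one invalid string:

```yaml
listeners:
  client:
    name: CLIENT
    containerPort: 9092
    protocol: PLAINTEXT
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: PLAINTEXT
  interbroker:
    name: INTERNAL
    containerPort: 9094
    protocol: PLAINTEXT
  external:
    name: EXTERNAL
    containerPort: 9095
    protocol: PLAINTEXT
  # Leave empty (or omit) so the chart computes the per-broker
  # advertised listeners and security protocol map from the
  # listener definitions above.
  advertisedListeners: ""
  securityProtocolMap: ""
```

If you do want to set advertisedListeners yourself, it would have to be a single comma-separated string for one broker's listeners, not a YAML list with one entry per broker.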
