Installing Kafka with the Bitnami Helm chart

Problem description

I'm running into an authentication problem when I try to connect a producer following the Helm chart's installation instructions:

I'm installing this Kafka Helm chart:

https://artifacthub.io/packages/helm/bitnami/kafka?modal=install
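
For reference, a typical install along the lines Artifact Hub suggests (a sketch; the release name kafka and the namespace develop are taken from the output below) would be:

helm install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --version 26.4.3 \
  --namespace develop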

I can install it with helm install without any problem; this is the output of the installation:

NAME: kafka
LAST DEPLOYED: Mon Dec  4 14:55:43 2023
NAMESPACE: develop
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 26.4.3
APP VERSION: 3.6.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.develop.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace develop -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.6.0-debian-11-r2 --namespace develop --command -- sleep infinity
    kubectl cp --namespace develop /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace develop -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --broker-list kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.develop.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
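
As an aside, the password line in that client.properties uses a shell command substitution, so the file needs to be rendered by a shell that can reach the cluster before it is copied into the pod. A minimal sketch, assuming kubectl access to the develop namespace and the default user1:

# Render client.properties locally so the password is the actual SCRAM secret,
# not the literal $(kubectl ...) string
PASSWORD="$(kubectl get secret kafka-user-passwords --namespace develop \
  -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)"
cat > client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user1" password="${PASSWORD}";
EOF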

The pods are created correctly and running:

k get pods

NAME                                                 READY   STATUS      RESTARTS        AGE
kafka-controller-0                                   2/2     Running     1 (2m42s ago)   3m33s
kafka-controller-2                                   2/2     Running     1 (2m42s ago)   3m33s
kafka-controller-1                                   2/2     Running     1 (2m36s ago)   3m33s

Then, following the instructions from the output, I create a kafka-client pod and connect to it:

k exec --tty -i kafka-client --namespace develop -- bash

I have copied over the client.properties mentioned in the instructions, and you can see the file is there:

@kafka-client:/$ cat /tmp/client.properties 
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace develop -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

Finally, I try to start the producer, but I get an authentication error:

@kafka-client:/$ kafka-console-producer.sh \
                --producer.config /tmp/client.properties \
                --broker-list kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 \
                --topic test

These are the logs:

[2023-12-04 14:06:37,028] ERROR [Producer clientId=console-producer] Connection to node -3 (kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local/10.1.63.93:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2023-12-04 14:06:37,028] WARN [Producer clientId=console-producer] Bootstrap broker kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-12-04 14:06:37,571] ERROR [Producer clientId=console-producer] Connection to node -2 (kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local/10.1.63.89:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)

Checking the logs in the first Kafka pod, I can see some of the configuration applied at startup:

sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = PLAIN
sasl.mechanism.inter.broker.protocol = PLAIN
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288

And this is the specific error about the failed authentication in the Kafka logs:

[2023-12-04 14:06:41,955] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /10.1.63.92 (channelId=10.1.63.90:9092-10.1.63.92:55104-1) (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
[2023-12-04 14:07:09,798] INFO [RaftManager id=0] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)

Any ideas? I have tried a few different things, such as overriding some of the chart's Helm values for the SASL configuration, but without any progress. I believe this should work as-is, since the Helm chart should already be configured to accept Kafka clients with the configuration provided in the install output, but clearly some step or something else is missing.

apache-kafka spring-kafka
1 Answer

Have you tried disabling authentication? You can always apply your security rules when processing the messages in your listener.
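
For example, with this chart it should be possible to switch the CLIENT listener to plaintext through a values override. A sketch (the listeners.client.protocol parameter name is assumed from the bitnami/kafka chart documentation, so double-check it against your chart version):

# Sketch: redeploy with an unauthenticated client listener (parameter name assumed)
helm upgrade --install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --version 26.4.3 --namespace develop \
  --set listeners.client.protocol=PLAINTEXT

With the listener on PLAINTEXT the client.properties file (and the SASL error) is no longer needed, at the cost of unauthenticated access from inside the cluster.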
