Why does my Kafka consumer show no messages after I set up SASL_PLAINTEXT?

Problem description (0 votes, 1 answer)

So I installed a Kafka server and confirmed it worked when I ran a simple test.

Then I had to set up SASL_PLAINTEXT for Kafka, so I followed the instructions here.

Basically I created:

  1. zookeeper_jaas.conf

  2. kafka_jaas.conf
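For reference, minimal JAAS files for this kind of setup typically look like the following (a sketch; the `admin`/`admin007` credentials are placeholders and must match on brokers and clients).

zookeeper_jaas.conf:

```
Server {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_admin="admin007";
};
```

kafka_jaas.conf:

```
KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin007"
   user_admin="admin007";
};
Client {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="admin"
   password="admin007";
};
```

The `KafkaServer` section authenticates clients and inter-broker traffic; the `Client` section is what the broker uses to talk to ZooKeeper.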

Then I added configuration to:

  1. zookeeper.properties
zookeeper.sasl.client=true
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
  2. server.properties
super.users=User:admin
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
listeners=SASL_PLAINTEXT://my_ip:9092
advertised.listeners=SASL_PLAINTEXT://my_ip:9092

Then I added the corresponding line to each ...-start.sh script:

export KAFKA_OPTS="-Djava.security.auth.login.config=file:$base_dir/../config/zookeeper_jaas.conf"

(and the equivalent line pointing at kafka_jaas.conf in the Kafka start script)

Then, after starting ZooKeeper and Kafka, I tried

but the consumer showed nothing.

Here is consumer.properties

What am I missing?

Kafka: version 3.5.0; ZooKeeper: version 3.6.4; Ubuntu: Linux 5.15.0-82-generic

apache-kafka sasl
1 answer

0 votes

I'm not quite sure what fixed it, but I think I just reinstalled things a different way and it worked.

Here is what I did:

  1. Installation
curl "https://archive.apache.org/dist/kafka/2.1.0/kafka_2.12-2.1.0.tgz" -o ~/Downloads/kafka2.tgz
mkdir kafka2
cd kafka2
tar -xvzf ~/Downloads/kafka2.tgz --strip 1
  2. ZooKeeper service config (zookeeper.service)
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
  3. Kafka service config (kafka.service)
[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka2/bin/kafka-server-start.sh /home/kafka/kafka2/config/server.properties'
ExecStop=/home/kafka/kafka2/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Then I just start and stop them as services:

sudo service zookeeper start
sudo service kafka start
sudo service kafka status
sudo service kafka stop
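With both services up, a quick end-to-end check looks roughly like this (a sketch: `client.properties` is a hypothetical client config holding the `security.protocol`/`sasl.jaas.config` settings for your chosen mechanism, and `myvm` is the advertised hostname):

```shell
# create a topic (the Kafka 2.1.x kafka-topics.sh still talks to ZooKeeper)
bin/kafka-topics.sh --create --topic test --zookeeper localhost:2181 \
    --partitions 1 --replication-factor 1

# produce a few messages through the SASL listener
bin/kafka-console-producer.sh --topic test --broker-list myvm:9092 \
    --producer.config client.properties

# consume them back; with wrong or missing auth settings this shows nothing
bin/kafka-console-consumer.sh --topic test --bootstrap-server myvm:9092 \
    --consumer.config client.properties --from-beginning
```

Watching the broker log (logs/server.log) while the consumer connects is the fastest way to see authentication failures.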

My test configuration

My Kafka runs inside a VM and I need to connect to it from outside the VM.

Config files:

kafka2/config/server.properties

listeners={auth mechanism}://0.0.0.0:9092
advertised.listeners={auth mechanism}://myvm:9092
...
security.inter.broker.protocol={auth mechanism}

Options ({auth mechanism})

  • SASL_SSL (SASL + TLS/SSL)
  • SASL_PLAINTEXT (SASL only)
  • SSL (TLS/SSL only)
  • PLAINTEXT (no auth, no encryption)
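Whichever protocol the broker uses, the client has to match it. For SASL_PLAINTEXT with the PLAIN mechanism, the consumer/producer properties file would contain roughly this (a sketch; credentials are placeholders and must match the broker's JAAS config):

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin007";
```

A consumer missing these settings typically just hangs and prints nothing, which matches the symptom in the question.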

Additional configuration

  1. With TLS/SSL
ssl.truststore.location=/home/kafka/ssl/kafka.broker0.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/home/kafka/ssl/kafka.broker0.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
  2. SASL
sasl.enabled.mechanisms={sasl mechanism}
sasl.mechanism.inter.broker.protocol={sasl mechanism}

Options ({sasl mechanism})

  • PLAIN
# add to config file
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin007" \
   user_admin="admin007";
(Note: the listener name embedded in the key must match your listener's security protocol in lowercase — e.g. use listener.name.sasl_plaintext.plain.sasl.jaas.config if the listener is SASL_PLAINTEXT.)
  • SCRAM-SHA-256
./kafka-configs.sh --alter --add-config 'SCRAM-SHA-256=[password=admin007],SCRAM-SHA-512=[password=admin007]' --entity-type users --entity-name admin --zookeeper localhost:2181
# add to config file
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
   username="admin" \
   password="admin007";
  • SCRAM-SHA-512
./kafka-configs.sh --alter --add-config 'SCRAM-SHA-256=[password=admin007],SCRAM-SHA-512=[password=admin007]' --entity-type users --entity-name admin --zookeeper localhost:2181
# add to config file
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
   username="admin" \
   password="admin007";
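To confirm the SCRAM credentials were actually stored in ZooKeeper, you can describe the user entity with the same-era syntax as the --alter calls above:

```
./kafka-configs.sh --describe --entity-type users --entity-name admin --zookeeper localhost:2181
```

If the output lists SCRAM-SHA-256/SCRAM-SHA-512 entries for `admin`, the broker can authenticate that user.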