We have 10 Kafka machines running Kafka version 1.x.
This Kafka cluster is part of HDP version 2.6.5.
We noticed the following messages in /var/log/kafka/server.log:
ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files
We also saw:
Broker 21 stopped fetcher for partition ...................... because they are in the failed log dir /kafka/kafka-logs {kafka.server.ReplicaManager}
and:
WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:2227488}, Current: {epoch:2, offset:261} for Partition: cars-list-75 {kafka.server.epoch.LeaderEpochFileCache}
Regarding this issue:

ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files

How can we increase the max open files limit in order to avoid this issue?
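Before raising the limit, it can help to confirm what the running broker is actually allowed and how many descriptors it really holds. A minimal sketch using /proc (the pgrep pattern and the KAFKA_PID variable are assumptions; adjust them to your broker process):

```shell
# PID of the running broker; kafka.Kafka as the main class is an assumption,
# e.g. KAFKA_PID=$(pgrep -f kafka.Kafka | head -n1)
# Falls back to the current shell here purely for illustration.
KAFKA_PID=${KAFKA_PID:-$$}

# Soft and hard "Max open files" limits in effect for that process:
grep "open files" "/proc/${KAFKA_PID}/limits"

# Number of file descriptors the process currently holds:
ls "/proc/${KAFKA_PID}/fd" | wc -l
```

If the descriptor count is close to the soft limit shown in /proc/PID/limits, the "Too many open files" errors are consistent with the limit being too low for the broker's partition and connection count.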
Update:

In Ambari, under Kafka -> Configs, we saw the following parameter.
Is this the parameter we should increase?
You can do it like this:

echo "* hard nofile 100000
* soft nofile 100000" | sudo tee --append /etc/security/limits.conf

Then reboot (or at least log out and start the broker from a fresh session) so the new limits take effect; limits.conf is only read when a new login session is created.