I am currently trying to integrate WSO2 API Manager with the ELK stack. I followed the instructions in the APIM docs, but so far I am not getting any logs or analytics.
Below are all my Filebeat, Logstash, Kibana, and Elasticsearch configurations, as well as the WSO2 repository configuration.
Filebeat configuration file:
filebeat.inputs:
- type: log
  paths:
    - /home/aluno/wso2am-4.2.0/repository/logs/apim_metrics.log
  include_lines: ['(apimMetrics):']

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.56.104:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Logstash configuration file:
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => ["message", "%{GREEDYDATA:UNWANTED}\ apimMetrics:%{GREEDYDATA:apimMetrics}\, %{GREEDYDATA:UNWANTED} \:%{GREEDYDATA:properties}"]
  }
  json {
    source => "properties"
  }
}

output {
  if [apimMetrics] == " apim:response" {
    elasticsearch {
      hosts => ["http://192.168.56.104:9200"]
      index => "apim_event_response"
      user => "elastic"
      password => "alunowso2"
    }
  } else if [apimMetrics] == " apim:faulty" {
    elasticsearch {
      hosts => ["http://192.168.56.104:9200"]
      index => "apim_event_faulty"
      user => "elastic"
      password => "alunowso2"
    }
  }
}
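As a side check, the grok pattern above can be exercised outside Logstash with an equivalent Python regex (GREEDYDATA is a greedy `.*`). The sample log line below is hypothetical; real apim_metrics.log entries may differ slightly:

```python
import json
import re

# Python equivalent of the grok pattern:
#   %{GREEDYDATA:UNWANTED}\ apimMetrics:%{GREEDYDATA:apimMetrics}\, %{GREEDYDATA:UNWANTED} \:%{GREEDYDATA:properties}
PATTERN = re.compile(r"^.* apimMetrics:(?P<apimMetrics>.*), .* :(?P<properties>.*)$")

# Hypothetical sample entry from apim_metrics.log.
line = ('20:22:01,828 [-] [metrics-writer-1]  INFO APIMMetricsLog apimMetrics:'
        ' apim:response, properties :{"apiName":"PizzaShackAPI","proxyResponseCode":200}')

m = PATTERN.match(line)
if m:
    # Note the captured value keeps its leading space: ' apim:response'.
    print(repr(m.group("apimMetrics")))
    # The properties capture is the raw JSON payload the json filter parses.
    props = json.loads(m.group("properties"))
    print(props["apiName"])
```

This is also why the `output` conditionals compare against `" apim:response"` and `" apim:faulty"` with a leading space: the grok capture starts right after `apimMetrics:` and includes the space that follows the colon.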
Kibana configuration file:
server.port: 5601
server.publicBaseUrl: "http://192.168.56.104:5601"
elasticsearch.username: "kibana_system"
elasticsearch.password: "alunowso2"

logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
Elasticsearch configuration file:
node.name: wso2-elastic
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

xpack.security.enabled: true
xpack.security.enrollment.enabled: true

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

http.host: 0.0.0.0
I have enabled [apim.analytics] and set it to use "elk" in the WSO2 repository configuration.
With this configuration I expected to be able to access the logs produced by WSO2 APIM, but instead I get this screen, and my apim_metrics.log also appears to be empty. What should I do?
(I omitted the comments from the config files because StackOverflow flagged the post as spam.)
Which API Manager version are you trying? ELK analytics was introduced to API Manager in 4.2.0, so verify that you are using a supported version. Also, share the [apim.analytics] section of your APIM /repository/conf/deployment.toml.
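For reference, with APIM 4.2.0 and ELK the expected shape of that section is roughly the following (a minimal sketch based on the WSO2 ELK analytics docs; double-check against the docs for your exact version):

```toml
[apim.analytics]
enable = true
type = "elk"
```

If this section is missing or `enable` is false, nothing is written to apim_metrics.log, which would explain the empty file.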
If you can see analytics logs in the repository/logs/apim_metrics.log file, then you can verify the ELK data flow by adding stdout {} to the output section of your Logstash configuration file.
Please refer to the following example:
output {
  if [apimMetrics] == " apim:response" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "apim_event_response"
      user => "elastic"
      password => "xxx"
      data_stream => false
    }
  } else if [apimMetrics] == " apim:faulty" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "apim_event_faulty"
      user => "elastic"
      password => "xxx"
      data_stream => false
    }
  }
  stdout {}
}