I have the following infrastructure:
ELK is installed as Docker containers, each component in its own container. On a virtual machine running CentOS, I installed the nginx web server and Filebeat to collect its logs. I enabled the nginx module in Filebeat:
> filebeat modules enable nginx
Before starting Filebeat, I set it up with Elasticsearch and installed its dashboards in Kibana.
The configuration file (I have removed the unnecessary comments):
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.kibana:
  host: "172.17.0.1:5601"

output.elasticsearch:
  hosts: ["172.17.0.1:9200"]
Then I ran the setup against Elasticsearch and Kibana:
> filebeat setup -e --dashboards
This works fine. In fact, if I keep it this way, everything runs perfectly: I can explore the collected logs in Kibana and use the nginx dashboards installed by the command above.
Now I want to pass the logs through Logstash instead. Here is my Logstash configuration, which uses the following pipeline:
- pipeline.id: filebeat
  path.config: "config/filebeat.conf"
filebeat.conf:
input {
  beats {
    port => 5044
  }
}

#filter {
#  mutate {
#    add_tag => ["filebeat"]
#  }
#}

output {
  elasticsearch {
    hosts => ["elasticsearch0:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { }
}
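(Not shown above: for events to reach this pipeline, `filebeat.yml` also has to ship to Logstash instead of directly to Elasticsearch. A minimal sketch, assuming Logstash is reachable on port 5044 at the same Docker bridge address used earlier:)

```yaml
# filebeat.yml — replace output.elasticsearch with output.logstash
output.logstash:
  hosts: ["172.17.0.1:5044"]
```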
A log document produced after going through Logstash looks like this:
{
    "offset" => 6655,
    "@version" => "1",
    "@timestamp" => 2019-02-20T13:34:06.886Z,
    "message" => "10.0.2.2 - - [20/Feb/2019:08:33:58 -0500] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/71.0.3578.98 Chrome/71.0.3578.98 Safari/537.36\" \"-\"",
    "beat" => {
        "version" => "6.5.4",
        "name" => "localhost.localdomain",
        "hostname" => "localhost.localdomain"
    },
    "source" => "/var/log/nginx/access.log",
    "host" => {
        "os" => {
            "version" => "7 (Core)",
            "codename" => "Core",
            "family" => "redhat",
            "platform" => "centos"
        },
        "name" => "localhost.localdomain",
        "id" => "18e7cb2506624fb6ae2dc3891d5d7172",
        "containerized" => true,
        "architecture" => "x86_64"
    },
    "fileset" => {
        "name" => "access",
        "module" => "nginx"
    },
    "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
    "input" => {
        "type" => "log"
    },
    "prospector" => {
        "type" => "log"
    }
}
My document is missing a lot of fields; there should be far more structured information.
UPDATE: This is what I expect:
{
  "_index": "filebeat-6.5.4-2019.02.20",
  "_type": "doc",
  "_id": "ssJPC2kBLsya0HU-3uwW",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 9639,
    "nginx": {
      "access": {
        "referrer": "-",
        "response_code": "404",
        "remote_ip": "10.0.2.2",
        "method": "GET",
        "user_name": "-",
        "http_version": "1.1",
        "body_sent": {
          "bytes": "3650"
        },
        "remote_ip_list": [
          "10.0.2.2"
        ],
        "url": "/access",
        "user_agent": {
          "patch": "3578",
          "original": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/71.0.3578.98 Chrome/71.0.3578.98 Safari/537.36",
          "major": "71",
          "minor": "0",
          "os": "Ubuntu",
          "name": "Chromium",
          "os_name": "Ubuntu",
          "device": "Other"
        }
      }
    },
    "prospector": { "type": "log" },
    "read_timestamp": "2019-02-20T14:29:36.393Z",
    "source": "/var/log/nginx/access.log",
    "fileset": { "module": "nginx", "name": "access" },
    "input": { "type": "log" },
    "@timestamp": "2019-02-20T14:29:32.000Z",
    "host": {
      "os": {
        "codename": "Core",
        "family": "redhat",
        "version": "7 (Core)",
        "platform": "centos"
      },
      "containerized": true,
      "name": "localhost.localdomain",
      "id": "18e7cb2506624fb6ae2dc3891d5d7172",
      "architecture": "x86_64"
    },
    "beat": {
      "hostname": "localhost.localdomain",
      "name": "localhost.localdomain",
      "version": "6.5.4"
    }
  },
  "fields": {
    "@timestamp": [ "2019-02-20T14:29:32.000Z" ]
  },
  "sort": [ 1550672972000 ]
}
Judging from your Logstash configuration, it doesn't look like you are parsing the log message anywhere.
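The Filebeat nginx module normally relies on an Elasticsearch ingest pipeline to extract the `nginx.access.*` fields; when events go through Logstash, that pipeline is bypassed, so Logstash has to do the parsing itself. A sketch of a filter along the lines of the nginx access example in Elastic's "Working with Filebeat Modules" guide (field names taken from the expected document above; treat the exact patterns as an approximation and check them against the current docs):

```
filter {
  # Parse the raw access-log line into nginx.access.* fields
  grok {
    match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
    remove_field => "message"
  }
  mutate {
    add_field => { "read_timestamp" => "%{@timestamp}" }
  }
  # Use the timestamp from the log line as the event timestamp
  date {
    match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
    remove_field => "[nginx][access][time]"
  }
  # Break the agent string into the structured user_agent object
  useragent {
    source => "[nginx][access][agent]"
    target => "[nginx][access][user_agent]"
    remove_field => "[nginx][access][agent]"
  }
}
```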
The answer provided by @baudsp is mostly correct, but incomplete. I had exactly the same problem, and I also had exactly the same filter mentioned in the documentation (and in @baudsp's answer), but the documents in Elasticsearch still did not contain any of the expected fields.
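An alternative to re-implementing the parsing in Logstash is to let Elasticsearch apply the module's own ingest pipeline even when events arrive via Logstash, using the `pipeline` option of the elasticsearch output plugin. A sketch; the pipeline name below follows the 6.x naming convention (`filebeat-<version>-<module>-<fileset>-default`) and is an assumption — verify the exact name with `GET _ingest/pipeline` against your cluster:

```
output {
  elasticsearch {
    hosts => ["elasticsearch0:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # Assumed pipeline name — check GET _ingest/pipeline for the real one
    pipeline => "filebeat-6.5.4-nginx-access-default"
  }
}
```

This reuses the parsing that `filebeat setup` installed into Elasticsearch, so the resulting documents should match the ones produced when Filebeat ships directly to Elasticsearch.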