Fluentd logging driver sends unstructured log messages


My environment is set up so that Docker container logs are forwarded to Fluentd, and Fluentd then forwards them to Splunk.

I'm having an issue with Fluentd where some Docker container logs are not in a structured format. From the documentation: the fluentd log driver sends the following metadata in structured log messages:

container_id,container_name,source,log
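For context, the containers are wired to Fluentd through Docker's fluentd log driver, roughly like this (the address, tag, and image name here are illustrative; in my case the log configuration lives in the ECS task definition):

docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  my-app-image

Each line the container writes to stdout/stderr then arrives at Fluentd as one event carrying the four metadata fields above.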

My problem is that a few of the logs have their metadata in a different order. For example, log 1:

{"log":"2019/03/12 13:59:49 [info] 6#6: *2425596 client closed connection while waiting for request, client: 10.17.84.12, server: 0.0.0.0:80","container_id":"789459f8f8a52c8b4b","container_name":"testingcontainer-1ed-fwij4-EcsTaskDefinition-1TF1DH,"source":"stderr"}

Log 2:

{"container_id":"26749a26500dd04e92fc","container_name":"/4C4DTHQR2V6C-EcsTaskDefinition-1908NOZPKPKY0-1","source":"stdout","log":"\u001B[0mGET \u001B[32m200 \u001B[0m0.634 ms - -\u001B[0m"}

These two logs present the metadata keys in different orders (log 1: [log, container_name, container_id, source]; log 2: [container_id, container_name, source, log]). This is causing me problems in Splunk. How can I fix this so the metadata keys always arrive in the same order?

My fluentd config file is:

<source>
  @type  forward
  @id    input1
  @label @mainstream
  @log_level trace
  port  24224
</source>

<label @mainstream>

<match *.**>
  @type copy
  <store>
    @type file
    @id   output_docker1
    path         /fluentd/log/docker.*.log
    symlink_path /fluentd/log/docker.log
    append       true
    time_slice_format %Y%m%d
    time_slice_wait   1m
    time_format       %Y%m%dT%H%M%S%z
    utc
    buffer_chunk_limit 512m
  </store>
  <store>
   @type s3
   @id   output_docker2
   @log_level trace

   s3_bucket bucketwert-1
   s3_region us-east-1
   path logs/
   buffer_path /fluentd/log/docker.log
   s3_object_key_format %{path}%{time_slice}_sbx_docker_%{index}.%{file_extension}
   flush_interval 3600s
   time_slice_format %Y%m%d
   time_format       %Y%m%dT%H%M%S%z
   utc
   buffer_chunk_limit 512m
  </store>
</match>
</label>
1 Answer

How about fluent-plugin-record-sort?
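That plugin isn't bundled with Fluentd, so it would need to be installed first; assuming the gem name matches the plugin name (depending on how Fluentd is installed, this may be gem install or td-agent-gem instead):

fluent-gem install fluent-plugin-record-sort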

Or, if you know all the keys in the record, you can use the built-in record_transformer plugin like this:

<source>
  @type dummy
  tag dummy
  dummy [
    {"log": "log1", "container_id": "123", "container_name": "name1", "source": "stderr"},
    {"container_id": "456", "container_name": "name2", "source": "stderr", "log": "log2"}
  ]
</source>

<filter dummy>
  @type record_transformer
  renew_record true
  keep_keys log,container_id,container_name,source
</filter>

<match dummy>
  @type stdout
</match>
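With renew_record true, record_transformer rebuilds each record from scratch, and keep_keys copies over the listed keys in the order given, so both events should come out with an identical key order. The stdout output should look roughly like this (timestamps illustrative):

2019-03-13 10:00:00.000000000 +0000 dummy: {"log":"log1","container_id":"123","container_name":"name1","source":"stderr"}
2019-03-13 10:00:01.000000000 +0000 dummy: {"log":"log2","container_id":"456","container_name":"name2","source":"stderr"}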

UPDATE (untested):

<source>
  @type  forward
  @id    input1
  @label @mainstream
  @log_level trace
  port  24224
</source>

<label @mainstream>
<filter>
  @type record_transformer
  renew_record true
  keep_keys log,container_id,container_name,source
</filter>
<match *.**>
  @type copy
  <store>
    @type file
    @id   output_docker1
    path         /fluentd/log/docker.*.log
    symlink_path /fluentd/log/docker.log
    append       true
    time_slice_format %Y%m%d
    time_slice_wait   1m
    time_format       %Y%m%dT%H%M%S%z
    utc
    buffer_chunk_limit 512m
  </store>
  <store>
   @type s3
   @id   output_docker2
   @log_level trace

   s3_bucket bucketwert-1
   s3_region us-east-1
   path logs/
   buffer_path /fluentd/log/docker.log
   s3_object_key_format %{path}%{time_slice}_sbx_docker_%{index}.%{file_extension}
   flush_interval 3600s
   time_slice_format %Y%m%d
   time_format       %Y%m%dT%H%M%S%z
   utc
   buffer_chunk_limit 512m
  </store>
</match>
</label>
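Note that the <filter> block has no tag pattern, so it applies to every event entering the @mainstream label before <match *.**> copies them to the file and S3 stores; that way both outputs (and anything downstream in Splunk) see records with the same fixed key order.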