Log parsing in Fluentd


I have deployed a Kubernetes project and use the EFK stack for log management.

This is how a log currently appears in Kibana. I want this log string to be broken out into new fields, in this case:

Fluentd version: v1.14.6

unmatched_line:
    2023-07-20T11:25:32.562071918+03:00 stdout F [2m2023-07-20 08:25:32.561[0;39m [32mTRACE [Authentication,3a0006d1c090f94033e572d60b0fa04b,234fdcce552ef6dd][0;39m [32msamuel[0;39m [35m1[0;39m [2m---[0;39m [2m[io-8080-exec-24][0;39m [36mo.h.t.d.s.BasicBinder [0;39m [2m:[0;39m binding parameter [2] as [VARCHAR] - [test1]
@timestamp:
    Jul 20, 2023 @ 11:25:33.251
docker.container_id:
    2580d9b5491a2d651de6c25990c55a9aac261151e91621d9773c7be8061199c6
docker.container_id.keyword:
    2580d9b5491a2d651de6c25990c55a9aac261151e91621d9773c7be8061199c6
kubernetes.container_image:
    ip:80/authentication:0.0.1
kubernetes.container_image_id:
 

How can I parse my logs in Fluentd (or in Elasticsearch or Kibana, if it is not possible in Fluentd) to create new fields so that I can sort them and navigate more easily?
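Note that the unmatched_line above contains ANSI color escape sequences (the [2m and [0;39m fragments) produced by the Spring console appender; a regexp written for the plain log format will not match until they are removed. Besides disabling colored output in the application, one option is a record_transformer filter that strips them. A minimal sketch, assuming the raw line lives in a field named log:

    <filter kubernetes.**>
      @type record_transformer
      enable_ruby true
      <record>
        # strip ANSI color escape sequences so a plain-text regexp can match
        log ${record["log"].to_s.gsub(/\e\[[0-9;]*m/, "")}
      </record>
    </filter>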

The logs my server prints always begin with a date and time, the log level, and the endpoint, traceid, and spanid. My sample logs:

2023-07-20 06:37:16.050  INFO [Authentication,ac15952cf4392edfdbe96fc1d4aa0d77,1088f5bb191c5cf0] samuel  1 --- [io-8080-exec-24] c.a.f.c.i.p.HeaderValidatorInterceptor   : HeaderValidatorInterceptor is running : HeaderValidatorInterceptor
2023-07-20 06:37:16.065 TRACE [Authentication,ac15952cf4392edfdbe96fc1d4aa0d77,0c860113d8542eb6] samuel  1 --- [io-8080-exec-24] o.h.t.d.s.BasicBinder                    : binding parameter [1] as [BIGINT] - [1]
2023-07-20 06:37:16.173  INFO [Authentication,ac15952cf4392edfdbe96fc1d4aa0d77,1088f5bb191c5cf0]   1 --- [io-8080-exec-24] c.a.f.s.f.FrameworkRequestContextFilter  : REQUEST DURATION : guid=[ops] traceid=[ac15952cf4392edfdbe96fc1d4aa0d77] spanid=[1088f5bb191c5cf0] method=[POST] path=[/authentication/getToken] status=[400] duration=[128 ms]

I want to parse and filter the logs into fields like this (a parser sketch follows the list):

date: 2023-07-20
time: 06:37:16.173
logType: INFO, ERROR, TRACE, DEBUG
endpoint: Authentication(Example)
traceid: ac15952cf4392edfdbe96fc1d4aa0d77
spanid: 1088f5bb191c5cf0
username: samuel
other: 1 --- [io-8080-exec-24] o.h.t.d.s.BasicBinder                    : binding parameter [1] as [BIGINT] - [1]
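As a sketch, a <parse> section whose regexp matches the three sample lines above and produces exactly those field names; the username group is optional, since the third sample line has no username. This assumes the ANSI color codes have already been stripped:

    <parse>
      @type regexp
      expression /^(?<date>\d{4}-\d{2}-\d{2}) (?<time>\d{2}:\d{2}:\d{2}\.\d{3})\s+(?<logType>\w+)\s+\[(?<endpoint>[^,]+),(?<traceid>[^,]+),(?<spanid>[^\]]+)\]\s+(?:(?<username>\S+)\s+)?(?<other>\d+ ---.*)$/
    </parse>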

My Fluentd config:

data:
  01_sources.conf: |-
    ## logs from podman
    <source>
      @type tail
      @id in_tail_container_logs
      @label @KUBERNETES
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          @type regexp
          expression '^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s+(?<log_level>\w+)\s+\[(?<endpoint>[^,]+),(?<traceid>[^,]+)(?:,(?<spanid>[^,]+))?\]\s+(?<username>[^\s]+)\s+(?<message>.*)$'
          time_key timestamp
          time_format %Y-%m-%d %H:%M:%S.%L
        </pattern>
      </parse>
      emit_unmatched_lines true
    </source>
  02_filters.conf: |-
    <label @KUBERNETES>
      <match kubernetes.var.log.containers.fluentd**>
        @type relabel
        @label @FLUENT_LOG
      </match>
      <match kubernetes.var.log.containers.**kube-system**.log>
        @type null
      </match>
      <filter kubernetes.**>
        @type kubernetes_metadata
        @id filter_kube_metadata
        skip_labels false
        skip_container_metadata false
        skip_namespace_metadata true
        skip_master_url true
      </filter>
      <filter kubernetes.var.log.containers.**>
        @type record_transformer
        <record>
          date ${timestamp}
          level ${log_level}
          endpoint ${endpoint}
          traceid ${traceid}
          spanid ${spanid}
          username ${username}
          message ${message}
        </record>
      </filter>
      <filter **>
        @type elasticsearch_genid
        hash_id_key _hash
      </filter>
      <match **>
        @type relabel
        @label @DISPATCH
      </match>
    </label>
  03_dispatch.conf: |-
    <label @DISPATCH>
      <filter **>
        @type prometheus
        <metric>
          name fluentd_input_status_num_records_total
          type counter
          desc The total number of incoming records
          <labels>
            tag ${tag}
            hostname ${hostname}
          </labels>
        </metric>
      </filter>

      <match **>
        @type relabel
        @label @OUTPUT
      </match>
    </label>
  04_outputs.conf: |-
    <label @OUTPUT>
      <match **>
        @type elasticsearch
        host "elasticsearch-master"
        port 9200
        path ""
        user elastic
        password changeme
        id_key _hash
        remove_keys _hash
        index_name fluentd-${time.strftime('%Y.%m.%d')}
        logstash_format true
        logstash_prefix fluentd
        logstash_dateformat %Y%m%d
        <buffer>
          flush_mode interval
          flush_interval 5s
          flush_thread_count 8
          flush_thread_interval 1s
        </buffer>
      </match>
    </label>
kind: ConfigMap
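One thing worth pointing out about the config above: the regexp in the tail source is applied to the raw container log file, and under containerd/CRI every line carries a runtime prefix (the 2023-07-20T11:25:32.562071918+03:00 stdout F ... visible in the unmatched_line above), so the application regexp can never match there. A common pattern is to parse the CRI prefix in the source and apply the application regexp in a separate parser filter. A sketch, assuming the fluent-plugin-parser-cri plugin is installed (its parser puts the application line in a message field):

    # in 01_sources.conf, inside the tail source
    <parse>
      @type cri
    </parse>

    # in 02_filters.conf, after the kubernetes_metadata filter
    <filter kubernetes.var.log.containers.**>
      @type parser
      key_name message
      reserve_data true
      emit_invalid_record_to_error false
      <parse>
        @type regexp
        expression /^(?<date>\d{4}-\d{2}-\d{2}) (?<time>\d{2}:\d{2}:\d{2}\.\d{3})\s+(?<logType>\w+)\s+\[(?<endpoint>[^,]+),(?<traceid>[^,]+),(?<spanid>[^\]]+)\]\s+(?:(?<username>\S+)\s+)?(?<other>\d+ ---.*)$/
      </parse>
    </filter>

With the fields parsed this way, the record_transformer block that copies ${timestamp}, ${log_level}, and so on is no longer needed.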
Tags: elasticsearch, logging, kibana, fluentd, efk
1 Answer

If I understand correctly, what you want to use is rewrite-tag-filter:

https://docs.fluentd.org/output/rewrite_tag_filter
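For illustration, a minimal sketch of a rewrite_tag_filter rule, here re-tagging records by their parsed log level so they can be matched and routed separately (the logType field name comes from the parsing in the question; the parsed. tag prefix is made up for the example):

    <match kubernetes.var.log.containers.**>
      @type rewrite_tag_filter
      <rule>
        # re-tag each record by its log level, e.g. parsed.INFO, parsed.TRACE
        key logType
        pattern /^(\w+)$/
        tag parsed.$1
      </rule>
    </match>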
