Resending old logs from Filebeat to Logstash


Thanks in advance for your help. I would like to re-ingest some logs in order to customize additional fields. I noticed that the registry file referenced in the Filebeat configuration keeps track of the files that have already been picked up. However, even if I delete that file's contents, I cannot get the old logs back. I also tried changing the timestamp of the source in the registry file, without success. What changes are needed to resend old logs from Filebeat to Logstash?

How can I get the logs back?
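For reference, in Filebeat 6.x the registry is a single JSON file under Filebeat's data path (commonly /var/lib/filebeat/registry on package installs; the exact location depends on the setup). Each entry records the source path, the byte offset already shipped, and the file's inode/device, roughly like this (the values below are illustrative):

$ cat /var/lib/filebeat/registry
[{"source":"/app/logs/WEB/WEB-rest-api/WEB-rest-api.log","offset":6771071,"timestamp":"2019-03-14T16:18:50.377-07:00","ttl":-1,"type":"log","FileStateOS":{"inode":1234567,"device":2049}}]

Filebeat resumes from the stored offset and identifies files by inode rather than by name or content, which is why editing timestamps in the registry does not change what gets re-read.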

Update:

This is the last log line in the Tomcat container:

2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB

And this is the event Filebeat picked up for it:

2019-03-14T16:18:50.377-0700    DEBUG   [publish]       pipeline/processor.go:308       Publish event: {
  "@timestamp": "2019-03-14T23:18:45.376Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.6.0"
  },
  "host": {
    "name": "tomcat",
    "architecture": "x86_64",
    "os": {
      "codename": "Core",
      "platform": "centos",
      "version": "7 (Core)",
      "family": "redhat",
      "name": "CentOS Linux"
    },
    "id": "6aaed308aa5a419f880c5e45eea65414",
    "containerized": true
  },
  "source": "/app/logs/WEB/WEB-rest-api/WEB-rest-api.log",
  "log": {
    "file": {
      "path": "/app/logs/WEB/WEB-rest-api/WEB-rest-api.log"
    }
  },
  "message": "2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB",
  "beat": {
    "name": "tomcat",
    "hostname": "tomcat",
    "version": "6.6.0"
  },
  "offset": 6771071,
  "prospector": {
    "type": "log"
  },
  "input": {
    "type": "log"
  },
  "meta": {
    "cloud": {
      "instance_name": "tomcat",
      "machine_type": "Standard_D8s_v3",
      "region": "CanadaCentral",
      "provider": "az",
      "instance_id": "6452bcf4-7f5d-4fc3-9f8e-5ea57f00724b"
    }
  }
}

And here is Logstash ingesting the log:

[2019-03-15T10:32:25,982][DEBUG][logstash.outputs.gelf    ] Sending GELF event {:event=>{"short_message"=>["2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB", " Connection cache monitor in thread: Thread-4 shutting down for pool: WEB"], "full_message"=>"2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB, Connection cache monitor in thread: Thread-4 shutting down for pool: WEB", "host"=>"{\"name\":\"tomcat\",\"os\":{\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"codename\":\"Core\"}}", "_source"=>"/app/logs/WEB/WEB-rest-api/WEB-rest-api.log", "_class"=>"ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor, %{JAVACLASS}", "_tags"=>"beats_input_codec_plain_applied", "_beat_hostname"=>"tomcat", "_beat_name"=>"tomcat", "_meta_cloud"=>{}, "_log_file"=>{"path"=>"/app/logs/WEB/WEB-rest-api/WEB-rest-api.log"}, "level"=>6}}

Filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /apps/logs/WEB/web-api/web-api.log
    - /apps/logs/WEB/web-api/web-rest-api.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  # Ignore files which were modified more than the defined timespan in the past.
  # Time strings like 2h (2 hours) and 5m (5 minutes) can be used.
  ignore_older: 0

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash
  multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["log1.cgi-dev.ca:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["log1.cgi-dev.ca:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]

  # Certificate for SSL client authentication
  ##ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  ##ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
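Two notes on the input section above, for context. ignore_older: 0 disables the age check (0 is also the default, meaning no file is skipped for being too old). And the multiline settings treat any line that does not start with a YYYY-MM-DD date as a continuation of the previous event, which is how Java stack traces stay attached to their log entry, e.g. (class and exception names made up for illustration):

2019-03-11 06:22:48 [Thread-4            ] ERROR:   ca.bc.gov.WEB.SomeClass request failed
java.lang.RuntimeException: boom                          <- no leading date, appended to the event above
    at ca.bc.gov.WEB.SomeClass.handle(SomeClass.java:42)  <- appended as well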

However, I am not getting the logs in either Kibana or Graylog. Notably, the same kind of log for the same class at INFO level does show up in Kibana and Graylog; it is only the DEBUG-level entries that are missing.

Any idea what could be going wrong?

Thanks a lot

logstash kibana filebeat graylog3
2 Answers
0 votes

The registry keeps the inode and the byte offset. Deleting the file's contents does not change its inode. Try shutting down Filebeat and deleting/resetting the byte offsets in the registry.
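A minimal sketch of that approach, assuming a package install with the registry at /var/lib/filebeat/registry (check path.data if yours lives elsewhere):

# stop filebeat first so it does not rewrite the registry on shutdown
systemctl stop filebeat
# back up the registry, then reset every stored byte offset to 0
cp /var/lib/filebeat/registry /var/lib/filebeat/registry.bak
sed -i 's/"offset":[0-9]*/"offset":0/g' /var/lib/filebeat/registry
# on restart, filebeat re-reads each tracked file from the beginning
systemctl start filebeat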


0 votes
  1. Stop Filebeat and Logstash.
  2. Clear the old data from Elasticsearch, if any exists.
  3. Delete the registry files registry and registry.old.
  4. Run Logstash.
  5. Run Filebeat with the command filebeat -e -once (see the sketch after this list).
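Put together as commands, and assuming the same registry location as above, this might look like the following sketch (the Elasticsearch index pattern is a guess; adjust to your setup):

systemctl stop filebeat logstash
# optional: drop the previously indexed data, e.g.
# curl -X DELETE 'http://log1.cgi-dev.ca:9200/filebeat-*'
rm -f /var/lib/filebeat/registry /var/lib/filebeat/registry.old
systemctl start logstash
# -e logs to stderr, -once exits after all harvesters reach end of file
filebeat -e -once -c /etc/filebeat/filebeat.yml

The Filebeat docs also suggest pairing -once with close_eof: true on the log input, so that each harvester shuts down when it reaches the end of its file.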