Filebeat Kubernetes processors and filtering

Problem description

I'm trying to ship Kubernetes pod logs to Elasticsearch using Filebeat.

I'm following the guide here: https://www.elastic.co/guide/en/beats/filebeat/6.0/running-on-kubernetes.html

Everything works fine, but I want to filter out events from the system pods. My updated configuration looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      multiline.pattern: '^\s'
      multiline.match: after
      json.message_key: log
      json.keys_under_root: true
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            namespace: ${POD_NAMESPACE}
        - drop_event.when.regexp:
            or:
              kubernetes.pod.name: "weave-net.*"
              kubernetes.pod.name: "external-dns.*"
              kubernetes.pod.name: "nginx-ingress-controller.*"
              kubernetes.pod.name: "filebeat.*"

I'm trying to ignore the weave-net, external-dns, ingress-controller, and filebeat events with the following:

- drop_event.when.regexp:
    or:
      kubernetes.pod.name: "weave-net.*"
      kubernetes.pod.name: "external-dns.*"
      kubernetes.pod.name: "nginx-ingress-controller.*"
      kubernetes.pod.name: "filebeat.*"

But they keep arriving in Elasticsearch.

elasticsearch logging kubernetes kibana filebeat
3 Answers
4 votes

The conditions need to be a list:

- drop_event.when.regexp:
    or:
      - kubernetes.pod.name: "weave-net.*"
      - kubernetes.pod.name: "external-dns.*"
      - kubernetes.pod.name: "nginx-ingress-controller.*"
      - kubernetes.pod.name: "filebeat.*"

I'm not sure whether your parameter ordering is valid. A working example of mine looks like this:

- drop_event:
    when:
      or:
        # Exclude traces from Zipkin
        - contains.path: "/api/v"
        # Exclude Jolokia calls
        - contains.path: "/jolokia/?"
        # Exclude pinging metrics
        - equals.path: "/metrics"
        # Exclude pinging health
        - equals.path: "/health"
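
For completeness, the nested form of the same condition, as documented for Filebeat conditions, would look like this (a sketch reusing the pod-name patterns from the question; it should behave the same as the flattened drop_event.when.regexp spelling above):

- drop_event:
    when:
      or:
        - regexp:
            kubernetes.pod.name: "weave-net.*"
        - regexp:
            kubernetes.pod.name: "external-dns.*"
        - regexp:
            kubernetes.pod.name: "nginx-ingress-controller.*"
        - regexp:
            kubernetes.pod.name: "filebeat.*"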

4 votes

This worked for me in Filebeat 6.1.3:

        - drop_event.when:
            or:
            - equals:
                kubernetes.container.name: "filebeat"
            - equals:
                kubernetes.container.name: "prometheus-kube-state-metrics"
            - equals:
                kubernetes.container.name: "weave-npc"
            - equals:
                kubernetes.container.name: "nginx-ingress-controller"
            - equals:
                kubernetes.container.name: "weave"
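
If events still slip through, check whether your events actually carry the field you're matching on (kubernetes.container.name here, versus kubernetes.pod.name in the question; the field names can differ between Filebeat versions). One quick way to inspect the raw events is to temporarily switch to the console output while testing (a debugging sketch only; swap your real output back afterwards):

output.console:
  pretty: true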

3 votes

I'm using a different approach, which is less efficient in terms of the number of logs shipped through the logging pipeline.

As in your setup, a Filebeat instance is deployed on each node using a DaemonSet. Nothing special here; this is the configuration I'm using:

apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ['logstash.elk.svc.cluster.local:5044']
kind: ConfigMap
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-config

And this one for the prospectors:

apiVersion: v1
data:
  kubernetes.yml: |-
    - type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      json.message_key: log
      json.keys_under_root: true
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            namespace: ${POD_NAMESPACE}
kind: ConfigMap
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-prospectors

The DaemonSet spec:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        command:
        - /usr/share/filebeat/filebeat
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: docker.elastic.co/beats/filebeat:6.0.1
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/prospectors.d
          name: prospectors
          readOnly: true
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 384
          name: filebeat-prospectors
        name: prospectors
      - emptyDir: {}
        name: data

Basically, all data from all logs from all containers is forwarded to Logstash, reachable at the service endpoint logstash.elk.svc.cluster.local:5044 (the service is named "logstash" in the "elk" namespace).

For brevity, I'll only give you the Logstash configuration (if you need more specific help with Kubernetes, ask in the comments):

The logstash.yml file is very basic:

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

It just sets the mount point of the directory where I mount the pipeline configuration files, which are:

10-beats.conf: declares the input for Filebeat (port 5044 has to be exposed with a service named "logstash"):

input {
  beats {
    port => 5044
    ssl => false
  }
}
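
This setup assumes the "logstash" Service already exists in the "elk" namespace. A minimal sketch of what it could look like (the app: logstash selector is an assumption; adjust it to match your Logstash pod labels):

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elk
spec:
  selector:
    app: logstash  # assumption: match this to your Logstash pods' labels
  ports:
  - name: beats
    port: 5044
    targetPort: 5044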

49-filter-logs.conf: this filter basically drops logs coming from containers that don't have an "elk" label. For pods that do have an "elk" label, it keeps the logs from the containers named in the pod's "elk" label. For example, if a pod has two containers named "nginx" and "python", setting the label "elk" to the value "nginx" will keep only the logs from the nginx container and drop the python ones. The type of each log is set to the namespace the pod runs in. This may not suit everyone (you end up with a single Elasticsearch index for all the logs belonging to a namespace), but it works for me because my logs are homogeneous.

filter {
    if ![kubernetes][labels][elk] {
        drop {}
    }
    if [kubernetes][labels][elk] {
        # check if kubernetes.labels.elk contains this container name
        mutate {
          split => { "kubernetes[labels][elk]" => "." }
        }
        if [kubernetes][container][name] not in [kubernetes][labels][elk] {
          drop {}
        }
        mutate {
          replace => { "@metadata[type]" => "%{kubernetes[namespace]}" }
          remove_field => [ "beat", "host", "kubernetes[labels][elk]", "kubernetes[labels][pod-template-hash]", "kubernetes[namespace]", "kubernetes[pod][name]", "offset", "prospector[type]", "source", "stream", "time" ]
          rename => { "kubernetes[container][name]" => "container"  }
          rename => { "kubernetes[labels][app]" => "app"  }
        }
    }
}
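
To make the labeling scheme concrete: since the filter splits the "elk" label on ".", the label value is a dot-separated list of container names to keep. A hypothetical pod template fragment (all names made up) that ships only the nginx container's logs would look like this:

template:
  metadata:
    labels:
      app: my-app   # hypothetical
      elk: "nginx"  # keep logs from the "nginx" container; use "nginx.python" to keep both
  spec:
    containers:
    - name: nginx
      image: nginx:1.13
    - name: python
      image: python:3.6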

The rest of the configuration deals with log parsing and isn't relevant here. The only other important part is the output:

99-output.conf: sends the data to Elasticsearch:

output {
  elasticsearch {
    hosts => ["http://elasticsearch.elk.svc.cluster.local:9200"]
    manage_template => false
    index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I hope you get the idea.

Pros of this approach:

  • Once Filebeat and Logstash are deployed, you don't have to update the Filebeat or Logstash configuration to get new logs into Kibana, as long as the new log types don't need parsing. You only need to add a label to the pod template.
  • By default, all logs are dropped as long as you don't explicitly set the label.

Cons of this approach:

  • All the logs from all pods pass through Filebeat and Logstash, and are only dropped in Logstash. That's a lot of work for Logstash, and it can be resource-intensive depending on the number of pods in your cluster.

There are surely better ways to solve this, but I think this solution is quite convenient, at least for my use case.
