Alerts firing in Prometheus but not showing up in Alertmanager


I can't seem to figure out why Alertmanager isn't receiving alerts from Prometheus. I'd appreciate any help with this, as I'm fairly new to both Prometheus and Alertmanager. I'm using a webhook for MS Teams to push notifications from Alertmanager.
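The webhook URL used below (http://localhost:2000/alert_channel) matches the conventions of the prometheus-msteams bridge, which by default listens on port 2000 and exposes one endpoint per configured connector. Assuming that is the bridge in use here (an assumption; the question does not name it), its connector configuration would look roughly like this:

# prometheus-msteams config - a sketch; the Teams webhook URL is a placeholder
connectors:
- alert_channel: https://outlook.office.com/webhook/...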

Alertmanager.yml

global:
  resolve_timeout: 5m


route:
  group_by: ['critical','severity']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'alert_channel'


receivers:
- name: 'alert_channel'
  webhook_configs:
  - url: 'http://localhost:2000/alert_channel'
    send_resolved: true

prometheus.yml (just the relevant part)

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
  - alert_rules.yml

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'kafka'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'

    static_configs:
    - targets: ['localhost:8080']
      labels:
        service: 'Kafka'

alertmanager.service

[Unit]
Description=Prometheus Alert Manager
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=alertmanager
Group=alertmanager
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/data/alertmanager \
  --web.listen-address=127.0.0.1:9093

Restart=always

[Install]
WantedBy=multi-user.target

Alert rules

groups:
- name: alert_rules
  rules:
  - alert: ServiceDown
    expr: up == 0
    for: 1m
    labels:
      severity: "critical"
    annotations:
      summary: "Service {{ $labels.service }} down!"
      description: "{{ $labels.service }} of job {{ $labels.job }} has been down for more than 1 minute."


  - alert: HostOutOfMemory
    expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 25
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Host out of memory (instance {{ $labels.instance }})"
      description: "Node memory is filling up (< 25% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"


  - alert: HostOutOfDiskSpace
    expr: (node_filesystem_avail_bytes{mountpoint="/"}  * 100) / node_filesystem_size_bytes{mountpoint="/"} < 40
    for: 1s
    labels:
      severity: warning
    annotations:
      summary: "Host out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 40% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

Prometheus alerts

But I don't see these alerts in Alertmanager.

I'm out of ideas at this point and really need help; I've been looking at this since last week.

monitoring prometheus microsoft-teams prometheus-alertmanager
1 Answer

Your Alertmanager configuration is wrong. group_by expects a list of label names, and from what I can see, critical is a label value, not a label name. So just remove critical and you should be good to go.
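With that change, the route section would look like this (a sketch that keeps the original timings and groups only by the severity label the rules actually set):

route:
  group_by: ['severity']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'alert_channel'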

Also take a look at this blog post, it is very helpful: https://www.robustperception.io/whats-the-difference-between-group_interval-group_wait-and-repeat_interval


Edit 1

If you want the receiver alert_channel to receive only alerts with severity critical, you have to create a sub-route with a match attribute.

Something like this:

route:
  group_by: ['...']  # '...' is a special value: group by all labels; fine if volume is very low
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 1h
  receiver: alert_channel  # the root route must name a default receiver
  routes:
    - match:             # match takes a map of label name to value, not a list
        severity: critical
      receiver: alert_channel
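Note that Alertmanager 0.22 and later deprecate match in favour of matchers; an equivalent sub-route would look roughly like this:

routes:
  - matchers:
      - severity="critical"
    receiver: alert_channel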

Edit 2

If that doesn't work, try the following:

route:
  group_by: ['...']
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 1h
  receiver: alert_channel

This should work. Check your Prometheus logs and see if you can find a hint there.
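Independent of the logs, a quick way to test the Prometheus-to-Alertmanager path end to end is a rule that always fires; a minimal sketch (the group and alert names here are made up for illustration):

groups:
- name: test_rules
  rules:
  # vector(1) always returns a sample, so this alert starts firing
  # immediately and never resolves. If it never shows up in the
  # Alertmanager UI, the problem is the Prometheus -> Alertmanager
  # connection rather than the rule expressions.
  - alert: AlwaysFiring
    expr: vector(1)
    labels:
      severity: critical
    annotations:
      summary: "Test alert for the alerting pipeline"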
