Sending alerts to Slack from the Prometheus Helm chart

Question · Votes: 0 · Answers: 2

I am trying to create alerts in Prometheus on Kubernetes and send them to a Slack channel. For this I am using the prometheus-community Helm charts (which already include Alertmanager). Since I want to use my own alerts, I also created a values.yml (shown below), strongly inspired by the one here. If I port-forward Prometheus I can see my alert going from inactive to pending to firing, but no message is ever sent to Slack. I am fairly confident the Alertmanager configuration itself is fine, because I have tested it with some pre-built alerts from another chart and those were delivered to Slack. So my best guess is that I am adding the alert the wrong way (in the serverFiles section), but I cannot figure out how to do it correctly. The Alertmanager logs also look normal to me. Does anyone know where my problem comes from?

---
serverFiles:
  alerting_rules.yml: 
    groups:
    - name: example
      rules:
      - alert: HighRequestLatency
        expr: sum(rate(container_network_receive_bytes_total{namespace="kube-logging"}[5m]))>20000
        for: 1m
        labels:
          severity: page
        annotations:
          summary: High request latency

alertmanager:
  persistentVolume:
    storageClass: default-hdd-retain
  ## Deploy alertmanager
  ##
  enabled: true

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""

  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
      slack_api_url: "I changed this url for the stack overflow question"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      #receiver: 'slack'
      routes:
      - match:
          alertname: DeadMansSwitch
        receiver: 'null'
      - match:
        receiver: 'slack'
        continue: true
    receivers:
    - name: 'null'
    - name: 'slack'
      slack_configs:
      - channel: 'alerts'
        send_resolved: false
        title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
        text: >-
          {{ range .Alerts }}
            *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
            *Description:* {{ .Annotations.description }}
            *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
            *Details:*
            {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
            {{ end }}
          {{ end }}

kubernetes prometheus-alertmanager
2 Answers

4 votes

So I finally solved this. The problem was that the kube-prometheus-stack and the prometheus Helm charts apparently work a bit differently. Instead of putting the configuration under alertmanager.config, I had to put it (everything starting from global) under alertmanagerFiles.alertmanager.yml.
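For reference, a minimal sketch of the shape the answer describes, assuming the prometheus-community/prometheus chart (the Slack webhook URL and channel are placeholders, and unrelated keys are omitted):

serverFiles:
  alerting_rules.yml:
    groups:
    - name: example
      rules:
      - alert: HighRequestLatency
        expr: sum(rate(container_network_receive_bytes_total{namespace="kube-logging"}[5m])) > 20000
        for: 1m
        labels:
          severity: page
        annotations:
          summary: High request latency

# In this chart the Alertmanager config lives under alertmanagerFiles.alertmanager.yml
# (starting from global), not under alertmanager.config.
alertmanagerFiles:
  alertmanager.yml:
    global:
      resolve_timeout: 5m
      slack_api_url: "https://hooks.slack.com/services/..."   # placeholder
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'slack'
    receivers:
    - name: 'slack'
      slack_configs:
      - channel: 'alerts'
        send_resolved: false
        title: '[{{ .Status | toUpper }}] Monitoring Event Notification'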


0 votes

Helm also uses double curly braces, the same syntax as the slack/mattermost receiver configuration, so when the Alertmanager config lives in a Helm-templated file, Helm tries to render those expressions itself.

To work around this, you can use the following approach:

# HELM values:
value_mm:
#  title: '{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}'
  title: '{{ template "telegram.default.message" . }}'
  text: '{{ template "slack.myorg.text" . }}'

# alertmanager-configmap.yaml (Victoria Metrics Alert)
    - name: mattermost
      slack_configs:
      - send_resolved: true
        api_url: {{ .Values.mm.url }}
        channel: "#alerts-channel"
        title: {{ .Values.value_mm.title | squote }}
        text: {{ .Values.value_mm.text | squote }}
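
As an alternative to injecting the strings via values, the Go template braces can be escaped inline in the Helm-templated file: wrapping the expression in backticks inside {{ ... }} makes Helm emit the inner braces literally, so Alertmanager receives the template untouched. A sketch of that variant (the title expression is only an example):

# alertmanager-configmap.yaml — Helm prints the backquoted string as-is,
# leaving the inner {{ ... }} for Alertmanager's own templating
        title: '{{ `{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}` }}'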