Why doesn't the Prometheus service see the whoami service, but does see itself?


I have a swarm with 1 node. The Prometheus service doesn't see the whoami service, but it does see itself. I can also fetch data from the whoami service with `curl 127.0.0.1:800`.

Here is what I get from Prometheus: screenshot

docker-stack.yml:

    version: '3.7'

    volumes:
      prometheus_data: {}
      grafana_data: {}

    services:
      prom:
        image: prom/prometheus
        volumes:
          - ./prometheus/:/etc/prometheus/
          - prometheus_data:/prometheus
        command:
          - '--config.file=/etc/prometheus/prometheus.yml'
          - '--storage.tsdb.path=/prometheus'
          - '--web.console.libraries=/usr/share/prometheus/console_libraries'
          - '--web.console.templates=/usr/share/prometheus/consoles'
        ports:
          - 8080:9090
        deploy:
          replicas: 1
          update_config:
            parallelism: 2
            delay: 10s
          restart_policy:
            condition: on-failure

      grafana:
        image: grafana/grafana
        ports:
          - "3000:3000"
        deploy:
          update_config:
            parallelism: 2
            delay: 10s
          restart_policy:
            condition: on-failure

      whoami:
        image: containous/whoami
        ports:
          - "800:80"
        deploy:
          replicas: 1
          update_config:
            parallelism: 2
            delay: 10s
          restart_policy:
            condition: on-failure

      telegraf:
        image: telegraf
        ports:
          - "998:998"
        deploy:
          replicas: 1
          update_config:
            parallelism: 2
            delay: 10s
          restart_policy:
            condition: on-failure

      alertmanager:
        image: prom/alertmanager
        ports:
          - 9093:9093
        deploy:
          replicas: 1
          update_config:
            parallelism: 2
            delay: 10s
          restart_policy:
            condition: on-failure

prometheus.yml:

    # my global config
    global:
      scrape_interval:     15s # By default, scrape targets every 15 seconds.
      evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
      # scrape_timeout is set to the global default (10s).

      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
      external_labels:
        monitor: 'my-project'

    # Load and evaluate rules in this file every 'evaluation_interval' seconds.
    rule_files:
      - 'alert.rules'
      # - "first.rules"
      # - "second.rules"

    # alert
    alerting:
      alertmanagers:
        - scheme: http
          static_configs:
            - targets:
                - "alertmanager:9093"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

      - job_name: 'prometheus'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s

        static_configs:
          - targets: ['127.0.0.1:9090']

      - job_name: 'cadvisor'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s

        dns_sd_configs:
          - names:
              - 'tasks.cadvisor'
            type: 'A'
            port: 8080

        # static_configs:
        #   - targets: ['cadvisor:8080']

      - job_name: 'node-exporter'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s

        dns_sd_configs:
          - names:
              - 'tasks.node-exporter'
            type: 'A'
            port: 9100

        # static_configs:
        #   - targets: ['node-exporter:9100']

      - job_name: 'whoami'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s

        dns_sd_configs:
          - names:
              - 'tasks.whoami'
            type: 'A'
            port: 800
        static_configs:
          - targets: ['127.0.0.1:800']
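For context, here is a rough sketch of how A-record DNS service discovery turns names into scrape targets: each name is resolved to task IPs and the configured `port` is appended to every resolved IP. This is a simplified illustration with a stubbed resolver and a made-up task IP, not Prometheus's actual code. Note that inside the swarm overlay network, whoami serves on its container port 80; the `800:80` mapping only publishes port 800 on the host.

```python
# Simplified illustration of Prometheus dns_sd_configs (type: A) target building.
# The resolver below is a stub standing in for the swarm DNS server, which
# returns one IP per task for names like 'tasks.whoami'.

def build_targets(names, port, resolve):
    """Resolve each DNS name and append the configured port, as A-record SD does."""
    targets = []
    for name in names:
        for ip in resolve(name):
            targets.append(f"{ip}:{port}")
    return targets

# Stub DNS: a made-up task IP on the overlay network.
fake_dns = {"tasks.whoami": ["10.0.1.5"]}

# With port 800 (the host-published port) the scrape goes to 10.0.1.5:800,
# where nothing listens inside the overlay network.
print(build_targets(["tasks.whoami"], 800, fake_dns.get))  # → ['10.0.1.5:800']

# With the container port 80, the target is what the task actually serves on.
print(build_targets(["tasks.whoami"], 80, fake_dns.get))   # → ['10.0.1.5:80']
```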

OMG. The site is telling me I should add more text because my post is mostly code and lacks text. Not sure what else it wants me to add...

docker prometheus docker-swarm