What is a "Lookup timed out" error in Celery + RabbitMQ + Docker, and how do I fix it?


I have a FastAPI application with Celery and RabbitMQ running in Docker (Compose). Here is the compose config:

services:
  scraper:
    build:
      context: fastapi-scrapers
      dockerfile: Dockerfile
    cpus: "0.7"
    # ports:
    #   - '8000:8000'
    environment:
      - TIMEOUT="120"
      - WEB_CONCURRENCY=2
    networks:
      - scrape-net
    volumes:
      - ../images/:/app/images/:rw
    extra_hosts:
      - "host.docker.internal:host-gateway"

  flower:
    image: mher/flower
    ports:
      - '5555:5555'
    environment:
      - CELERY_BROKER_URL=amqp://admin:pass@rabbitmq:5672/
      # - CELERY_BROKER_URL=redis://redis:6379/0
      - FLOWER_BASIC_AUTH=admin:pass
    depends_on:
      - scraper
    networks:
      - scrape-net

  rabbitmq:
    image: "rabbitmq:latest"
    # ports:
    #   - '5672:5672'
    #   - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    networks:
      - scrape-net
    extra_hosts:
      - "host.docker.internal:host-gateway"

networks:
  scrape-net:
    driver: bridge

Here is the FastAPI app's Dockerfile:

FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY ./app /code/app

# CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
CMD ["bash", "-c", "celery -A app.celery.tasks worker --loglevel=info --concurrency=8 -E -P eventlet & uvicorn app.main:app --host 0.0.0.0 --port 8000"]

And here is the Celery code in the app:

celery_app = Celery('tasks', broker='amqp://admin:pass@rabbitmq:5672/')

celery_app.conf.update(
    CELERY_RESULT_EXPIRES=3600,
    CELERY_AMQP_TASK_RESULT_EXPIRES=3600
)
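As an aside, the startup log below warns that `broker_connection_retry` will stop controlling startup retries in Celery 6.0. A minimal sketch of the equivalent configuration using the modern lowercase setting names (same broker URL as above; this is a suggestion, not the asker's code):

```python
from celery import Celery

celery_app = Celery('tasks', broker='amqp://admin:pass@rabbitmq:5672/')

celery_app.conf.update(
    # Lowercase Celery 5.x equivalent of CELERY_RESULT_EXPIRES.
    result_expires=3600,
    # Silences the CPendingDeprecationWarning seen in the logs and keeps
    # the retry-on-startup behaviour under Celery 6.x.
    broker_connection_retry_on_startup=True,
)
```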

FastAPI app logs:

site-scraper-1  | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
site-scraper-1  | INFO:     Started parent process [8]
site-scraper-1  | INFO:     Started server process [11]
site-scraper-1  | INFO:     Waiting for application startup.
site-scraper-1  | INFO:     Application startup complete.
site-scraper-1  | INFO:     Started server process [10]
site-scraper-1  | INFO:     Waiting for application startup.
site-scraper-1  | INFO:     Application startup complete.
site-scraper-1  | /usr/local/lib/python3.9/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
site-scraper-1  | absolutely not recommended!
site-scraper-1  | 
site-scraper-1  | Please specify a different user using the --uid option.
site-scraper-1  | 
site-scraper-1  | User information: uid=0 euid=0 gid=0 egid=0
site-scraper-1  | 
site-scraper-1  |   warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
site-scraper-1  |  
site-scraper-1  |  -------------- celery@d4dec9482220 v5.3.6 (emerald-rush)
site-scraper-1  | --- ***** ----- 
site-scraper-1  | -- ******* ---- Linux-5.10.0-26-amd64-x86_64-with-glibc2.36 2024-03-10 22:26:52
site-scraper-1  | - *** --- * --- 
site-scraper-1  | - ** ---------- [config]
site-scraper-1  | - ** ---------- .> app:         tasks:0x7fd3a198f2e0
site-scraper-1  | - ** ---------- .> transport:   amqp://admin:**@rabbitmq:5672//
site-scraper-1  | - ** ---------- .> results:     disabled://
site-scraper-1  | - *** --- * --- .> concurrency: 8 (eventlet)
site-scraper-1  | -- ******* ---- .> task events: ON
site-scraper-1  | --- ***** ----- 
site-scraper-1  |  -------------- [queues]
site-scraper-1  |                 .> celery           exchange=celery(direct) key=celery
site-scraper-1  |                 
site-scraper-1  | 
site-scraper-1  | [tasks]
site-scraper-1  |   . app.celery.tasks.start_scrape
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:26:52,909: WARNING/MainProcess] /usr/local/lib/python3.9/site-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
site-scraper-1  | whether broker connection retries are made during startup in Celery 6.0 and above.
site-scraper-1  | If you wish to retain the existing behavior for retrying connections on startup,
site-scraper-1  | you should set broker_connection_retry_on_startup to True.
site-scraper-1  |   warnings.warn(
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:27:13,338: ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbitmq:5672//: [Errno -3] Lookup timed out.
site-scraper-1  | Trying again in 2.00 seconds... (1/100)
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:27:35,769: ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbitmq:5672//: [Errno -3] Lookup timed out.
site-scraper-1  | Trying again in 4.00 seconds... (2/100)

This worked fine for months, and now the FastAPI app throws this error. The RabbitMQ logs look fine: the user is created and the RabbitMQ instance starts up as expected. The Flower container connects to the RabbitMQ container without any problem; only the FastAPI container is affected.

I tried using localhost instead of rabbitmq in the AMQP URL, but that didn't work. It also didn't work with host.docker.internal. I tried updating the Python app's dependencies to the latest versions, but that changed nothing either. I tried using Redis as the message broker, but it throws the same timeout error, so I think this is related to my Python app.
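Since "[Errno -3] Lookup timed out" is a name-resolution failure (the container never resolved the broker's hostname at all) rather than a connection refusal, one way to narrow it down is to test DNS directly from inside the scraper container. A small stdlib-only sketch (the hostnames are the ones from the compose file above):

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if the container's resolver can turn `host` into an IP."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:  # covers NXDOMAIN as well as resolver timeouts
        return False

if __name__ == "__main__":
    # "rabbitmq" should resolve via Docker's embedded DNS on scrape-net.
    for name in ("rabbitmq", "localhost", "host.docker.internal"):
        print(name, "->", can_resolve(name))
```

If `rabbitmq` fails to resolve here while Flower's container resolves it fine, the problem is the scraper container's DNS, not Celery or the broker.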

python docker rabbitmq celery
1 Answer

It turned out to be fixed in a later version of the Python Docker image, so I changed

FROM python:3.9

to

FROM python:latest

and now it works fine again.
