How to run django-apscheduler in a Docker container that runs Django via gunicorn


Is it possible to run django-apscheduler inside a Docker container that runs Django via gunicorn? The problem I currently have is that the custom manage.py command in my entrypoint script runs forever, so gunicorn is never executed.

My entrypoint script:

#!/bin/sh
python manage.py runapscheduler --settings=core.settings_dev_docker

My runapscheduler.py:

# runapscheduler.py
import logging

from django.conf import settings

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from django.core.management.base import BaseCommand
from django_apscheduler.jobstores import DjangoJobStore
from django_apscheduler.models import DjangoJobExecution
from django_apscheduler import util

from backend.scheduler.scheduler import scheduler

logger = logging.getLogger("backend")


def my_job():
    logger.error("Hello World!")
    # Your job processing logic here...
    pass


# The `close_old_connections` decorator ensures that database connections, that have become
# unusable or are obsolete, are closed before and after your job has run. You should use it
# to wrap any jobs that you schedule that access the Django database in any way.
@util.close_old_connections
# TODO: Change max_age to keep old jobs longer
def delete_old_job_executions(max_age=604_800):
    """
    This job deletes APScheduler job execution entries older than `max_age` from the database.
    It helps to prevent the database from filling up with old historical records that are no
    longer useful.

    :param max_age: The maximum length of time to retain historical job execution records.
                    Defaults to 7 days.
    """
    DjangoJobExecution.objects.delete_old_job_executions(max_age)


class Command(BaseCommand):
    help = "Runs APScheduler."

    def handle(self, *args, **options):
        # scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
        # scheduler.add_jobstore(DjangoJobStore(), "default")

        scheduler.add_job(
            my_job,
            trigger=CronTrigger(minute="*/1"),  # Every minute
            id="my_job",  # The `id` assigned to each job MUST be unique
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added job 'my_job'.")

        scheduler.add_job(
            delete_old_job_executions,
            trigger=CronTrigger(
                day_of_week="mon", hour="00", minute="00"
            ),  # Midnight on Monday, before start of the next work week.
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True,
        )
        logger.error(
            "Added weekly job: 'delete_old_job_executions'."
        )

        try:
            logger.error("Starting scheduler...")
            scheduler.start()
        except KeyboardInterrupt:
            logger.error("Stopping scheduler...")
            scheduler.shutdown()
            logger.error("Scheduler shut down successfully!")

The command in my Docker container looks like this:

command: gunicorn core.wsgi:application --bind 0.0.0.0:8000

How do I run runapscheduler correctly so that gunicorn also runs? Do I have to create a separate process for runapscheduler?

python django gunicorn apscheduler
1 Answer

I ran into this and got it working. I use docker-compose to start the process, but that part is not really relevant:

version: "3.9"

services:
  app:
    container_name: django
    build: .
    command: >
      bash -c "pipenv run python manage.py makemigrations
      && pipenv run python manage.py migrate
      && pipenv run python manage.py runserver 0.0.0.0:8000
      & pipenv run python manage.py startscheduler"

    volumes:
      - ./xy:/app
    ports:
      - 8000:8000
    environment:
        - HOST=db
    depends_on:
      db:
        condition: service_healthy

The important part is how we supply the command:

  • If you chain the commands with &&, the second-to-last command never exits, so the next command never starts
  • If you chain them with &, the two run in parallel
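Applied to the entrypoint script from the question, the same idea looks roughly like the sketch below: the scheduler is backgrounded with &, and gunicorn becomes the container's foreground process (assuming the same core.wsgi module and settings as in the question).

```shell
#!/bin/sh
# Sketch of an entrypoint: the scheduler runs in the background (&),
# while gunicorn is exec'd in the foreground as the container's main process.
python manage.py runapscheduler --settings=core.settings_dev_docker &
exec gunicorn core.wsgi:application --bind 0.0.0.0:8000
```

Be aware that with this pattern the container keeps running even if the backgrounded scheduler crashes; running the scheduler as its own service in docker-compose is the more robust option.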

Bonus tip: if you configure logging in settings.py (instead of relying on print), the log output of the management command ends up in the same log stream as runserver.
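As a sketch of that tip: a minimal LOGGING setting that routes the "backend" logger (the name used in runapscheduler.py above) to the console, so the management command's output appears in the container's log stream. The format string is just an example choice.

```python
# settings.py (sketch): send the "backend" logger to stdout/stderr so that
# log lines from the scheduler management command show up in `docker logs`.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            # Example format; adjust to taste.
            "format": "{levelname} {asctime} {module} {message}",
            "style": "{",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        },
    },
    "loggers": {
        # Must match the name passed to logging.getLogger() in the command.
        "backend": {
            "handlers": ["console"],
            "level": "INFO",
        },
    },
}
```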
