How to run a celery worker with a Django app scalable by AWS Elastic Beanstalk?

Question · Votes: 0 · Answers: 4

How can I use Django with AWS Elastic Beanstalk and also run tasks with celery only on the leader node?

django amazon-web-services celery amazon-elastic-beanstalk django-celery
4 Answers
36 votes

This is how I set up celery with django on Elastic Beanstalk, with scalability working fine.

Please keep in mind that the 'leader_only' option for container_commands only works on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
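As a purely illustrative sketch (not part of the original answer): scale-in protection can be applied from the instance itself with boto3, assuming the instance profile allows autoscaling:SetInstanceProtection, the classic IMDSv1 metadata endpoint is available, and the ASG name placeholder is replaced with your environment's real Auto Scaling group:

# protect_leader.py - minimal sketch: protect this instance from scale-in
import urllib.request

import boto3

# Ask the EC2 instance metadata service (IMDSv1) for this instance's id.
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id', timeout=2
).read().decode()

autoscaling = boto3.client('autoscaling', region_name='eu-west-1')
autoscaling.set_instance_protection(
    InstanceIds=[instance_id],
    AutoScalingGroupName='my-eb-asg',  # placeholder: your EB environment's ASG name
    ProtectedFromScaleIn=True,
)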

Add a bash script for the celery worker configuration.

Add the file root_folder/.ebextensions/files/celery_configuration.txt:

#!/usr/bin/env bash

# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv

[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf

# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread

# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update

# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker

Take care about the script execution during deployment, but only on the leader node (leader_only: true). Add the file root_folder/.ebextensions/02-python.config:

container_commands:
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true

File requirements.txt:

celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"

Configure celery for the Amazon SQS broker (get your desired endpoint from the list: http://docs.aws.amazon.com/general/latest/gr/rande.html) in root_folder/django_app/settings.py:

...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Pick the region matching your SQS endpoint; Ireland ("eu-west-1") is used here.
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "eu-west-1",
    'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
    'visibility_timeout': 360,
    'polling_interval': 1
}
...
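One hedged note on the broker URL above: if your AWS secret key contains characters such as '/' or '+', the plain string interpolation breaks the sqs:// URL, and the credentials should be URL-encoded first. A small sketch using kombu's safequote helper (the environment variable names are assumptions):

import os
from kombu.utils.url import safequote

aws_access_key_id = safequote(os.environ['AWS_ACCESS_KEY_ID'])
aws_secret_access_key = safequote(os.environ['AWS_SECRET_ACCESS_KEY'])
# Build CELERY_BROKER_URL exactly as above, with the quoted values.
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)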

Celery configuration for the django_app Django app

Add the file root_folder/django_app/celery.py:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')

app = Celery('django_app')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

Modify the file root_folder/django_app/__init__.py:

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app

__all__ = ['celery_app']
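With this wiring in place, app.autodiscover_tasks() picks up tasks from every installed app's tasks.py. A minimal sketch (the app and task names are hypothetical):

# root_folder/some_app/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    # Runs on the supervisord-managed worker; the SQS queue name carries the
    # 'django_app-<env>-' prefix configured in settings.py.
    return x + y

Calling add.delay(2, 3) from your Django code then pushes the task onto the SQS queue.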


6 votes

This is how I extended @smentek's answer to allow for multiple worker instances and a single beat instance - same thing applies where you have to protect your leader. (I don't have an automated solution for that yet.)

Please note that envvar updates to EB via the EB CLI or the web interface are not reflected by celery beat or the workers until the app server has restarted. This caught me off guard once.

A single celery_configuration.sh file outputs two scripts for supervisord. Note that celery-beat has autostart=false, otherwise you end up with many beats after an instance restart:

# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999

environment=$celeryenv"

# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf

# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

Then in container_commands we only restart beat on the leader:

container_commands:
  # create the celery configuration file
  01_create_celery_beat_configuration_file:
    command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
  # restart celery beat if leader
  02_start_celery_beat:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
    leader_only: true
  # restart celery worker
  03_start_celery_worker:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"

3 votes

In case someone is following smentek's answer and getting the error:

05_celery_tasks_run: /usr/bin/env bash does not exist.

know that, if you are using Windows, your problem might be that the "celery_configuration.txt" file has WINDOWS EOL when it should have UNIX EOL. If using Notepad++, open the file and click "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and the error is gone.
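If you would rather fix the line endings programmatically than in an editor, a small sketch (the path is the file from the accepted answer):

# fix_eol.py - rewrite Windows CRLF line endings as Unix LF
path = '.ebextensions/files/celery_configuration.txt'
with open(path, 'rb') as f:
    data = f.read()
with open(path, 'wb') as f:
    f.write(data.replace(b'\r\n', b'\n'))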

Also, a couple of warnings for real amateurs like me:

  • Be sure to include "django_celery_beat" and "django_celery_results" in "INSTALLED_APPS" in your settings.py file (see the sketch after this list).

  • To check celery errors, connect to your instance with "eb ssh" and then run "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" is the number of lines you want to read from the end of the file).
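For the first warning, a sketch of the relevant settings.py lines (the rest of your app list will differ):

INSTALLED_APPS = [
    # ... your Django and project apps ...
    'django_celery_beat',     # database-backed schedule for celery beat (-S django)
    'django_celery_results',  # required for CELERY_RESULT_BACKEND = 'django-db'
]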

Hope this helps someone, it would have saved me some hours!


0 votes

As stated in the other answers, the accepted solution requires a lot of extra work besides coding. There is now a nice library that handles this:
https://github.com/ybrs/single-beat

You install the library and create a redis server with ElastiCache.
Your Procfile can then point at the cache server with an environment variable, like this:

web: gunicorn --bind :8000 --workers 3 --threads 2 appname.wsgi:application
celery_beat: SINGLE_BEAT_REDIS_SERVER=$SINGLE_BEAT_REDIS single-beat celery -A proj beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
celery_worker: celery -A proj worker -l INFO -P solo