Celery task calls itself after it succeeds, without celerybeat

Problem description (votes: 1, answers: 1)

I want my celery task to call itself every 30 minutes after the current task completes, but since the task downloads files from a remote server, it sometimes takes longer than expected, so I don't want to use celery beat. Also, self.retry only applies when an error occurs. Here is my task:

import requests
from celery import shared_task

session = requests.Session()

@shared_task(name="download_big", bind=True, acks_late=True,
             autoretry_for=(Exception, requests.exceptions.RequestException),
             retry_kwargs={"max_retries": 4, "countdown": 3})
def download_big(self):
    my_file = session.get("https://example.com/hello.mp4")
    if my_file.status_code == requests.codes["OK"]:
        with open("hello.mp4", "wb") as f:
            f.write(my_file.content)
    else:
        self.retry()
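
In other words, the behaviour I am after looks roughly like this sketch (repeat_download and do_download are placeholder names), where the task re-enqueues itself once it has finished:

@shared_task(bind=True)
def repeat_download(self):
    do_download()  # placeholder for the actual download logic
    # Re-enqueue this same task 30 minutes after this run completes,
    # so a slow download pushes the next run back instead of overlapping it.
    self.apply_async(countdown=1800)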

Update:

OK, I changed my structure:

from celery import group

@shared_task(name="download_big", bind=True, acks_late=True,
             autoretry_for=(Exception, requests.exceptions.RequestException),
             retry_kwargs={"max_retries": 4, "countdown": 3})
def download_big(self, url, name):
    my_file = session.get(url)
    if my_file.status_code == requests.codes["OK"]:
        with open(name, "wb") as f:
            f.write(my_file.content)
    else:
        self.retry()

@shared_task(name="download_all", bind=True, acks_late=True,
             autoretry_for=(Exception, requests.exceptions.RequestException),
             retry_kwargs={"max_retries": 4, "countdown": 3})
def download_all(self):
    my_list = [...]  # bunch of urls with names
    jobs = []
    for name, url in my_list:
        jobs.append(download_big.si(url, name))
    group(jobs)()

So in this case I have to call the download_all method instead of download_big, so that the files are downloaded in parallel, and once all of the group's tasks have completed it needs to call itself again after 30 minutes.
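
To start the cycle, download_all would be enqueued once, for example from a Django view or a shell:

download_all.delay()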

python django celery django-celery
1 Answer

0 votes

You could try using a chord, which runs a group of tasks and, when they have all completed, runs a callback that can be used to reschedule.

E.g.

from celery import chord

@shared_task(name="download_big", bind=True, acks_late=True,
             autoretry_for=(Exception, requests.exceptions.RequestException),
             retry_kwargs={"max_retries": 4, "countdown": 3})
def download_big(self, url, name):
    my_file = session.get(url)
    if my_file.status_code == requests.codes["OK"]:
        with open(name, "wb") as f:
            f.write(my_file.content)
    else:
        self.retry()

@shared_task(name="download_all", bind=True, acks_late=True,
             autoretry_for=(Exception, requests.exceptions.RequestException),
             retry_kwargs={"max_retries": 4, "countdown": 3})
def download_all(self):
    my_list = [...]  # bunch of urls with names
    jobs = []
    for name, url in my_list:
        jobs.append(download_big.si(url, name))

    # Run the group; once every task in it has finished,
    # re-enqueue download_all itself with a 30-minute countdown.
    chord(jobs)(download_all.si().set(countdown=1800))
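
Note that the chord callback must be a signature rather than the result of apply_async() (which would enqueue the task immediately instead of handing it to the chord). download_all.si() makes the signature immutable, so the group's results are not passed into it, and .set(countdown=1800) delays the callback by 30 minutes, which restarts the whole cycle.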