Celery pool parameter ignored in setup_defaults?


I have a script that runs a Celery worker like this:

if __name__ == '__main__':
    worker = celery.Worker()
    worker.setup_defaults(
        loglevel=logging.INFO,
        pool='eventlet',
        concurrency=500
    )
    worker.start()

This starts Celery, and the output is:

 -------------- [email protected] v5.2.7 (dawn-chorus)
--- ***** ----- 
-- ******* ---- Linux-5.10.0-19-cloud-amd64-x86_64-with-glibc2.31 2022-12-14 15:23:55
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         __main__:0x7fdda296baf0
- ** ---------- .> transport:   redis://localhost:6379/6
- ** ---------- .> results:     redis://localhost:6379/6
- *** --- * --- .> concurrency: 500 (eventlet)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . task1
  . task2
  . celery.accumulate
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap

However, the processes somehow run with the fork pool:

[2022-12-14 15:08:00,623: WARNING/ForkPoolWorker-2] - Some print command
[2022-12-14 15:08:00,623: WARNING/ForkPoolWorker-1] - Some print command

So I thought maybe it was a concurrency issue and tried gevent instead. Same result.

So I tried something else: I replaced 'eventlet' with random text, 'helloworld', and this is the output:


 -------------- [email protected] v5.2.7 (dawn-chorus)
--- ***** ----- 
-- ******* ---- Linux-5.10.0-19-cloud-amd64-x86_64-with-glibc2.31 2022-12-14 15:23:55
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         __main__:0x7fdda296baf0
- ** ---------- .> transport:   redis://localhost:6379/6
- ** ---------- .> results:     redis://localhost:6379/6
- *** --- * --- .> concurrency: 500 (helloworld)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . task1
  . task2
  . celery.accumulate
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap

I mean, what?

If the pool name were invalid, Celery should fail, but here nothing happens.

Even stranger: this worked fine before and stopped working yesterday, with no changes at all on my side.

Was there a recent update that affects how the pool is defined?

python celery pool

1 Answer

Well, the reason is that somehow the call to setup_defaults was being ignored.

The correct way is to pass the options to the Worker constructor instead:

if __name__ == '__main__':
    worker = celery.Worker(
        loglevel=logging.INFO,
        pool='eventlet',
        concurrency=500
    )
    worker.start()

This takes the pool setting into account (and setting it to a bogus value now fails at startup, as it should).
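As an alternative sketch (hypothetical, assuming you hold a Celery app instance named `app`): Celery 5 also exposes `app.worker_main()`, which reuses the same option parser as the `celery worker` CLI, so a misspelled `--pool` value is rejected up front rather than silently ignored. The options can be built as a plain argv list:

```python
# Build the same options the CLI would receive; pool and concurrency
# are passed as flags rather than through setup_defaults().
argv = [
    'worker',
    '--loglevel=INFO',
    '--pool=eventlet',       # rejected at startup if not a known pool
    '--concurrency=500',
]

# Hypothetical usage, where `app` is your configured Celery() instance:
# if __name__ == '__main__':
#     app.worker_main(argv=argv)
```

This keeps the pool choice in one place and fails loudly on typos, which would have caught the 'helloworld' case above.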
