I have two Celery tasks:
@shared_task
def start_scrapy(book_id, link, site, end_page=None, start_page=1):
    book = Book.objects.get(id=book_id)
    process = CrawlerProcess(get_project_settings())
    process.crawl(MainSpider, book=book, link=link, site=site,
                  end_page=end_page, start_page=start_page)
    process.start()

@shared_task
def find():
    ...
The problem is with the first task. I call it twice from a serializer's `create` method: the first call works fine, but on the second call I get a TypeError.
The create method:
def create(self, validated_data):
    book_id = validated_data.get('book_id')
    site = validated_data.get('site')
    feghahat_link = validated_data.get('feghahat_link')
    maktabeh_link = validated_data.get('maktabeh_link')
    start = validated_data.get('start')
    book = Book.objects.get(id=book_id)
    validated_data.update({'title': book.title, 'volume_number': book.volume_number})
    task_chain = chain(
        start_scrapy.s(book_id=book_id, link=feghahat_link, site='es', start_page=start),
        start_scrapy.s(book_id=book_id, link=maktabeh_link, site='is', start_page=start),
        find.s(book_id, site)
    )
    task_chain()
    return validated_data
And the traceback:
[2023-09-06 19:53:55,762: ERROR/ForkPoolWorker-4] Task extractor.crawler.start_scrapy[f918e091-f30c-48e3-8da5-ef4c43448559] raised unexpected: TypeError('start_scrapy() takes from 3 to 5 positional arguments but 6 were given')
Traceback (most recent call last):
File "/home/mostafa/Projects/BookExtractor/venv/lib/python3.10/site-packages/celery/app/trace.py", line 539, in trace_task
_chsig.apply_async(
File "/home/mostafa/Projects/BookExtractor/venv/lib/python3.10/site-packages/celery/canvas.py", line 400, in apply_async
return _apply(args, kwargs, **options)
File "/home/mostafa/Projects/BookExtractor/venv/lib/python3.10/site-packages/celery/app/task.py", line 559, in apply_async
check_arguments(*(args or ()), **(kwargs or {}))
TypeError: start_scrapy() takes from 3 to 5 positional arguments but 6 were given
I'm calling the same function a second time with the same number of arguments, yet the error says otherwise!
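For what it's worth, I can reproduce the exact same TypeError outside of Celery by calling a plain function with the same signature and one extra leading positional argument (a stand-alone sketch; `start_scrapy` is stubbed out here without the Scrapy/Django pieces):

```python
# Stub with the same signature as the task, minus Scrapy/Django.
def start_scrapy(book_id, link, site, end_page=None, start_page=1):
    return (book_id, link, site, end_page, start_page)

# Called as declared (up to 5 positional args), it works fine:
start_scrapy(1, "https://example.com/a", "es", None, 1)

# With one extra leading positional argument, it fails the same way:
try:
    start_scrapy("extra", 1, "https://example.com/b", "es", None, 1)
except TypeError as exc:
    print(exc)  # takes from 3 to 5 positional arguments but 6 were given
```

So it looks as if, somewhere along the way, the second call in the chain receives an extra positional argument that I never passed.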
Thanks in advance for any help.
I suspect there is some kind of argument-recognition error here; please take a look.