In Python, why is the line after the await statement not executed?


I have some code that downloads data from Common Crawl using asyncio. However, the program only ever prints "rest" to my console; it never prints "in" or any of the response output (see my download_url() function). The full code is below. I just don't understand why the print('in') statement is never executed even though the program exits normally.

import asyncio
import aiohttp

async def download_url(few_urls, tmp_folder, id, process_id):
    html_contents = []
    url = few_urls[0]
    # Each url entry is (length, offset, website_url); build an HTTP Range
    # header asking for exactly that byte span of the archive file.
    leng, offset, website_url = url[0], url[1], url[2]
    start = int(offset)
    end = int(offset) + int(leng) - 1
    header = {'Range': f'bytes={start}-{end}'}
    print(header)
    await asyncio.sleep(0.1)
    print('rest')  # this is the last thing that ever appears in the console
    async with aiohttp.ClientSession() as session:
        async with session.get(website_url, headers=header) as response:
            print('in')  # never printed
            # response.text() already returns str, so the original
            # output.decode() call would have raised AttributeError anyway.
            output = await response.text()
            print(output)
    return

async def download_urls(urls, tmp_folder, process_id):
    # Split urls into blocks of 10; any remainder becomes a final short block.
    url_blocks = []
    block_num = 10
    start = 0
    while start + block_num <= len(urls):
        url_blocks.append(urls[start:start+block_num])
        start += block_num
    if start < len(urls):
        url_blocks.append(urls[start:])

    # One task per block, all run concurrently.
    tasks = []
    for i, url_block in enumerate(url_blocks):
        tasks.append(asyncio.create_task(download_url(url_block, tmp_folder, i, process_id)))
    await asyncio.gather(*tasks)

def main(urls, tmp_folder, process_id):
    print(f'in main {process_id}')
    # asyncio.run creates a fresh event loop, runs the coroutine, and closes the loop.
    asyncio.run(download_urls(urls, tmp_folder, process_id))
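
To narrow this down, here is a minimal sketch that isolates just the ranged GET, with an explicit timeout so a stalled connection raises an error instead of hanging silently. The URL and byte range below are placeholders for illustration, not values from my real data:

import asyncio
import aiohttp

async def fetch_range(url, start, end):
    # Fail loudly after 30 seconds rather than waiting forever on a stalled connection.
    timeout = aiohttp.ClientTimeout(total=30)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url, headers={'Range': f'bytes={start}-{end}'}) as response:
            print('status:', response.status)  # 206 Partial Content means the range was honored
            return await response.text()

# Placeholder URL and byte range for illustration only.
print(asyncio.run(fetch_range('https://example.com/archive.warc.gz', 0, 99)))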

I hope someone familiar with Python asyncio can answer my question. Thanks! By the way, the Python version is 3.8.
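
For comparison, a minimal check that the line after an await does execute normally under asyncio.run (asyncio.sleep stands in for the network call), which makes me think the problem is specific to the session.get call rather than await itself:

import asyncio

async def demo():
    print('rest')             # mirrors the print before session.get
    await asyncio.sleep(0.1)  # stands in for the awaited network call
    print('in')               # this prints fine here, so await itself resumes

asyncio.run(demo())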

python asynchronous python-asyncio aiohttp