Retry request mechanism

Question

I am building a web scraper project, and what I am trying to do is implement a smart retry mechanism using urllib3, requests, and Beautiful Soup.

When I set timeout=1 so that the request fails, in order to check the retries, it breaks with an exception. The code is below:

import requests
import re
from bs4 import BeautifulSoup
import json
import time
import sys
import logging
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

# this get_items method builds a dict of item id -> [name, price],
# scraping the items found at each link in link_dict

def get_items(self, link_dict):
    itemdict = {}
    for k, v in link_dict.items():
        boolean = True
        # here, we fetch the content from the url, using the requests library
        while boolean:
            try:
                a = requests.Session()
                retries = Retry(total=3, backoff_factor=0.1,
                                status_forcelist=[301, 500, 502, 503, 504])
                a.mount('https://', HTTPAdapter(max_retries=retries))
                page_response = a.get('https://www.XXXXXXX.il' + v, timeout=1)
            except requests.exceptions.Timeout:
                print("Timeout occurred")
                logging.basicConfig(level=logging.DEBUG)
            else:
                boolean = False

        # we use the html parser to parse the url content and store it in a variable
        page_content = BeautifulSoup(page_response.content, "html.parser")
        for i in page_content.find_all('div', attrs={'class': 'prodPrice'}):
            parent = i.parent.parent.contents[0]
            getparentfunc = parent.find("a", attrs={"href": "javascript:void(0)"})
            itemid = re.search(r".*'(\d+)'.*", getparentfunc.attrs['onclick']).groups()[0]
            itemName = re.sub(r'\W+', ' ', i.parent.contents[0].text)
            priceitem = re.sub(r'[\D.]+ ', ' ', i.text)
            itemdict[itemid] = [itemName, priceitem]
    return itemdict

I would appreciate a solution for an efficient retry mechanism, or any other simple approach. Thanks, Iso

python web-scraping beautifulsoup python-requests urllib3
1 Answer

I often do something like this:

def get(url, retries=3):
    try:
        r = requests.get(url)
        return r
    except requests.exceptions.RequestException as err:
        # Timeout, ConnectionError etc. all derive from RequestException;
        # requests does not raise ValueError on a failed request
        print(err)
        if retries < 1:
            raise ValueError('No more retries!')
        # retry the same url with one fewer attempt remaining
        return get(url, retries - 1)
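
Since the question specifically asked about urllib3's Retry, here is also a minimal sketch of the transport-level approach the question code was attempting (not part of the answer above). The idea is to build the Session and mount the HTTPAdapter once, outside any loop, and let urllib3 do the retrying. The make_session helper name, the example.com URL, and the backoff/status values here are illustrative placeholders, not from the original post:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

def make_session():
    # retry connect/read failures and the listed status codes at the
    # transport level; backoff_factor spaces out successive attempts
    retries = Retry(total=3, backoff_factor=0.5,
                    status_forcelist=[500, 502, 503, 504])
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=retries)
    # mount once so every request through this session reuses the adapter
    session.mount('https://', adapter)
    session.mount('http://', adapter)
    return session

session = make_session()
try:
    page_response = session.get('https://example.com', timeout=1)
except requests.exceptions.RequestException as err:
    # raised only after all retries are exhausted
    print('Request failed after retries:', err)

With this setup the hand-rolled while loop from the question becomes unnecessary: urllib3 retries connect/read failures and the listed status codes itself, and requests only raises once all retries are exhausted.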