Scrapy with Splash won't crawl

Question · votes: 0 · answers: 2

I built a crawler and Splash itself is working (I tested it in my browser), but Scrapy still doesn't crawl or extract any items.

My actual code is:

# -*- coding: utf-8 -*-
import scrapy
import json
from scrapy.http.headers import Headers
from scrapy.spiders import CrawlSpider, Rule
from oddsportal.items import OddsportalItem


class OddbotSpider(CrawlSpider):
    name = "oddbot"
    allowed_domains = ["oddsportal.com"]
    start_urls = (
        'http://www.oddsportal.com/matches/tennis/',
    )

    def start_requests(self):
        # Route every start URL through Splash's render.html endpoint.
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 5.5}
                }
            })

    def parse(self, response):
        item = OddsportalItem()
        print(response.body)
python scrapy web-crawler splash
2 Answers
0 votes

Try importing scrapy_splash and issuing the request via SplashRequest instead:

from scrapy_splash import SplashRequest

yield SplashRequest(url, endpoint='render.html', args={'any':any})
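For either form of the request to be rendered at all, scrapy-splash also has to be enabled in the project settings; otherwise the splash meta is ignored and the raw page is fetched. A sketch of the wiring described in the scrapy-splash README (the SPLASH_URL below is an assumption for a local Docker instance):

```python
# settings.py -- scrapy-splash wiring, per the scrapy-splash README.
# SPLASH_URL assumes a local Splash started with:
#   docker run -p 8050:8050 scrapinghub/splash
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Make request fingerprinting aware of Splash arguments.
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```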

0 votes

You need to override CrawlSpider's _requests_to_follow so that it also accepts Splash responses:

from scrapy.http import HtmlResponse
from scrapy_splash import SplashJsonResponse, SplashTextResponse

def _requests_to_follow(self, response):
    # Also accept Splash response types, which the stock CrawlSpider rejects.
    if not isinstance(response, (HtmlResponse, SplashJsonResponse, SplashTextResponse)):
        return
    seen = set()
    for n, rule in enumerate(self._rules):
        links = [lnk for lnk in rule.link_extractor.extract_links(response)
                 if lnk not in seen]
        if links and rule.process_links:
            links = rule.process_links(links)
        for link in links:
            seen.add(link)
            r = self._build_request(n, link)
            yield rule.process_request(r)
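The crucial part is the isinstance guard: the stock CrawlSpider._requests_to_follow only extracts links from an HtmlResponse, while Splash-rendered pages come back as SplashJsonResponse or SplashTextResponse, so every link is silently dropped. A self-contained illustration of that guard, using stand-in classes so no Scrapy install is needed (the names merely mirror the real ones in scrapy.http and scrapy_splash):

```python
# Stand-in classes mirroring the names involved; the real classes live in
# scrapy.http and scrapy_splash. For illustration only.
class HtmlResponse:
    pass

class SplashTextResponse:
    pass

def would_follow_links(response, accepted):
    """Mimic the isinstance guard at the top of _requests_to_follow."""
    return isinstance(response, accepted)

# Stock CrawlSpider: only a plain HtmlResponse passes; Splash pages are dropped.
stock = (HtmlResponse,)
print(would_follow_links(SplashTextResponse(), stock))    # False

# Patched guard above: Splash responses pass too, so links get followed.
patched = (HtmlResponse, SplashTextResponse)
print(would_follow_links(SplashTextResponse(), patched))  # True
```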