Scrapy: crawl an entire website and return a single value — the total number of links

Question · 0 votes · 1 answer

Crawling an entire website is easy:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor

class MySpider(scrapy.Spider):
    name = 'myspider'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com']

    def parse(self, response):
        # Extract every in-domain link on the page and follow it recursively
        extractor = LinkExtractor(allow_domains='quotes.toscrape.com')
        links = extractor.extract_links(response)
        for link in links:
            yield scrapy.Request(link.url, callback=self.parse)
        yield {'url': response.url}

But how can I return a single value: the total number of links?

python scrapy

1 Answer

0 votes

For statistics about the crawl, use the Scrapy Stats Collector:

self.crawler.stats.inc_value('link_count')

Inside a spider, the stats collector is available via the crawler as `self.crawler.stats`.

The stats can also be recovered from a Scrapy Cloud project via the metadata API:

from scrapinghub import ScrapinghubClient

client = ScrapinghubClient()

project = client.get_project(<PROJECT_ID>)
job = project.jobs.get(<JOB_ID>)

stats = job.metadata.get('scrapystats')


>>> job.metadata.get('scrapystats')
...
'downloader/response_count': 104,
'downloader/response_status_count/200': 104,
'finish_reason': 'finished',
'finish_time': 1447160494937,
'item_scraped_count': 50,
'log_count/DEBUG': 157,
'log_count/INFO': 1365,
'log_count/WARNING': 3,
'memusage/max': 182988800,
'memusage/startup': 62439424,
...
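Once you have that dict, the "single value" the question asks for is just a key lookup. A sketch with a hypothetical snapshot of the stats (the `link_count` value here is made up, matching the custom stat incremented with `inc_value` above):

```python
# Hypothetical snapshot of the dict returned by job.metadata.get('scrapystats')
scrapystats = {
    'downloader/response_count': 104,
    'item_scraped_count': 50,
    'link_count': 321,  # the custom stat; this value is illustrative only
}

# Fall back to 0 if the stat was never incremented during the crawl
total_links = scrapystats.get('link_count', 0)
print(total_links)  # prints 321
```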