Although the website has 50 pages, I want to limit the number of pages crawled to 5 using the code below. I'm using Scrapy's CrawlSpider. How can I achieve this?
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class BooksSpider(CrawlSpider):
    name = "bookscraper"
    allowed_domains = ["books.toscrape.com"]
    start_urls = ["https://books.toscrape.com/"]

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//h3/a'), callback='parse_item', follow=True),
        Rule(LinkExtractor(restrict_xpaths='//li[@class="next"]/a'), follow=True),
    )

    def parse_item(self, response):
        product_info = response.xpath('//table[contains(@class, "table-striped")]')
        name = response.xpath('//h1/text()').get()
        upc = product_info.xpath('(./tr/td)[1]/text()').get()
        price = product_info.xpath('(./tr/td)[3]/text()').get()
        availability = product_info.xpath('(./tr/td)[6]/text()').get()
        yield {'Name': name, 'UPC': upc, 'Availability': availability, 'Price': price}
You can use the CLOSESPIDER_PAGECOUNT setting, which is the right approach here: https://docs.scrapy.org/en/latest/topics/extensions.html#module-scrapy.extensions.closespider
Set it in your project settings, or override it in the spider itself via custom_settings if you only want the limit for this particular spider.
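A minimal sketch of the spider-level override, reusing your BooksSpider (only the custom_settings attribute is new; rules and parse_item stay exactly as in your code):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class BooksSpider(CrawlSpider):
    name = "bookscraper"
    allowed_domains = ["books.toscrape.com"]
    start_urls = ["https://books.toscrape.com/"]

    # Close the spider once 5 responses have been downloaded.
    custom_settings = {"CLOSESPIDER_PAGECOUNT": 5}

    # ... rules and parse_item unchanged from your code ...

One caveat: Scrapy sends requests concurrently, so requests already in flight when the limit is reached may still be processed, meaning the final page count can slightly exceed 5.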