Scrapy LinkExtractor scraping the wrong links

Problem description (votes: 0, answers: 1)

When using a Scrapy Rule with a LinkExtractor, the links found on pages matched by my regular expression aren't quite right. I'm probably missing something obvious, but I can't see it...

Every link pulled from the pages matching my regex is correct, except that each one has an '=' sign tacked onto the end. What am I doing wrong?

URL being scraped:

http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00

An example of a link I want to scrape:

<a href="playrh.cgi?3986">Durant, Kevin</a>

My rule / link extractor / regex:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

rules = [  # <a href="playrh.cgi?3986">Durant, Kevin</a>
    Rule(LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}$'),
         callback='parse_player',
         follow=False
    )
]

Scraped URL (taken from the parse_player response object):

'http://rotoguru1.com/cgi-bin/playrh.cgi?4496='

Note the extra '=' appended to the end of the URL!
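
As a sanity check, the pattern and both URLs can be tested directly with Python's re module (a standalone sketch; the inputs are copied from above):

import re

pattern = re.compile(r'playrh\.cgi\?[0-9]{4}$')

# The href as it appears in the page source: matches.
print(bool(pattern.search('playrh.cgi?3986')))  # True

# The URL Scrapy actually requests: the trailing '=' defeats the $ anchor.
print(bool(pattern.search('http://rotoguru1.com/cgi-bin/playrh.cgi?4496=')))  # False

So the pattern itself is consistent with the raw hrefs; the '=' is being added somewhere between extraction and the request.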

Thanks!


Sure, here is my log...

As far as I can tell, no redirect is happening, but that pesky '=' is somehow ending up in the request URL...
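
One quick way to verify that no redirect is involved: Scrapy's RedirectMiddleware records any intermediate URLs in the request meta under 'redirect_urls', so the callback can log that key (a minimal sketch of the parse_player callback):

def parse_player(self, response):
    # RedirectMiddleware stores intermediate URLs under 'redirect_urls';
    # a missing/empty value means the response was served directly.
    redirects = response.meta.get('redirect_urls')
    self.logger.info('%s redirected via %r', response.url, redirects)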

I'm going to look into 'link processing' now to work around this, but I'd like to get to the bottom of the issue.

Thanks!

Testing started at 10:24 AM ...
pydev debugger: process 1352 is connecting

Connected to pydev debugger (build 143.1919)
2016-02-17 10:24:57,789: INFO    >>  Scrapy 1.0.3 started (bot: Scraper)
2016-02-17 10:24:57,789: INFO    >>  Optional features available: ssl, http11
2016-02-17 10:24:57,790: INFO    >>  Overridden settings: {'NEWSPIDER_MODULE': 'Scraper.spiders', 'LOG_ENABLED': False, 'SPIDER_MODULES': ['Scraper.spiders'], 'CONCURRENT_REQUESTS': 128, 'BOT_NAME': 'Scraper'}
2016-02-17 10:24:57,904: INFO    >>  Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-02-17 10:24:58,384: INFO    >>  Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-02-17 10:24:58,388: INFO    >>  Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-02-17 10:24:58,417: INFO    >>  Enabled item pipelines: MongoOutPipeline
2016-02-17 10:24:58,420: INFO    >>  Spider opened
2016-02-17 10:24:58,424: INFO    >>  Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-02-17 10:24:58,427: DEBUG   >>  spider_opened (NbaRotoGuruDfsPerformanceSpider) : 'NbaRotoGuruDfsPerformanceSpider'
2016-02-17 10:24:58,428: DEBUG   >>  Telnet console listening on 127.0.0.1:6023
2016-02-17 10:24:59,957: DEBUG   >>  Crawled (200) <GET http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00> (referer: None)
2016-02-17 10:25:01,130: DEBUG   >>  Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4496=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00)
**********************************>> CUT OUT ABOUT 550 LINES HERE FOR BREVITY (Just links same as directly above/below) *********************************>>
2016-02-17 10:25:28,983: DEBUG   >>  Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4632=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00)
2016-02-17 10:25:28,987: DEBUG   >>  Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?3527=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00)
2016-02-17 10:25:29,400: DEBUG   >>  Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4564=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00)
2016-02-17 10:25:29,581: INFO    >>  Closing spider (finished)
2016-02-17 10:25:29,585: INFO    >>  Dumping Scrapy stats:
{'downloader/request_bytes': 194884,
 'downloader/request_count': 570,
 'downloader/request_method_count/GET': 570,
 'downloader/response_bytes': 5886991,
 'downloader/response_count': 570,
 'downloader/response_status_count/200': 570,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 2, 17, 15, 25, 29, 582000),
 'log_count/DEBUG': 572,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 570,
 'scheduler/dequeued': 570,
 'scheduler/dequeued/memory': 570,
 'scheduler/enqueued': 570,
 'scheduler/enqueued/memory': 570,
 'start_time': datetime.datetime(2016, 2, 17, 15, 24, 58, 424000)}
2016-02-17 10:25:29,585: INFO    >>  Spider closed (finished)
Process finished with exit code 0
Tags: regex, hyperlink, scrapy, rules
1 Answer

0 votes

The following snippet works by trimming the rogue '=' character from the links:

...

rules = [
    Rule(LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}'),
         process_links='process_links',
         callback='parse_player',
         follow=False
    )
]

...

def process_links(self, links):
    # Trim the stray '=' from each extracted URL before the request is made.
    # (Safe here because these player URLs contain no other '=' characters.)
    for link in links:
        link.url = link.url.replace('=', '')
    return links

...
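
For what it's worth, the stray '=' most likely comes from URL canonicalization rather than from the page itself: by default the LinkExtractor runs each extracted link through w3lib's canonicalize_url, which appends '=' to query parameters that have no value. Assuming that is the cause, here is a sketch of both the behavior and an alternative fix (canonicalize is a standard LinkExtractor argument in this Scrapy version line):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from w3lib.url import canonicalize_url

# A valueless query parameter picks up a trailing '=' when canonicalized:
print(canonicalize_url('http://rotoguru1.com/cgi-bin/playrh.cgi?4496'))
# -> http://rotoguru1.com/cgi-bin/playrh.cgi?4496=

# So an alternative to process_links is to keep the hrefs exactly as found:
rules = [
    Rule(LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}$',
                       canonicalize=False),
         callback='parse_player',
         follow=False
    )
]

With canonicalize=False the original '$'-anchored regex from the question can stay as-is.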