Scrapy prepends an unwanted prefix to links when following them

Problem description (votes: 0, answers: 1)
2019-03-17 17:21:06 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.google.com/www.distancesto.com/coordinates/de/jugenheim-in-rheinhessen-latitude-longitude/history/401814.html> (referer: http://www.google.com/search?q=Rheinhessen+Germany+coordinates+longitude+latitude+distancesto)
2019-03-17 17:21:06 [scrapy.core.scraper] DEBUG: Scraped from <404 http://www.google.com/www.distancesto.com/coordinates/de/jugenheim-in-rheinhessen-latitude-longitude/history/401814.html>

So instead of following 'www.distancesto.com/coordinates/de/jugenheim-in-rheinhessen-latitude-longitude/history/401814.html', Scrapy prepends 'http://www.google.com/' to it, which obviously returns a broken link. This is beyond me; I can't understand why. The response doesn't contain that prefix. I even tried slicing the string after 22 characters (the length of the unwanted prefix), but that erased part of the real link.
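What happens here is standard relative-URL resolution: `response.follow` resolves its argument against the current page URL using the same rules as `urllib.parse.urljoin`, and a string without a scheme, such as `www.distancesto.com/...`, counts as a relative path rather than an absolute URL. A minimal sketch (URLs shortened for illustration):

```python
from urllib.parse import urljoin

# The Google results page the spider is currently on.
base = "http://www.google.com/search?q=Rheinhessen+Germany+coordinates"

# After splitting off "https://", no scheme is left, so the string is
# treated as a *relative path* and joined onto the page's base URL.
no_scheme = "www.distancesto.com/coordinates/de/401814.html"

print(urljoin(base, no_scheme))
# -> http://www.google.com/www.distancesto.com/coordinates/de/401814.html
```

This is exactly the broken URL seen in the crawl log, so the prefix comes from URL joining, not from the response content.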

from scrapy import Spider

# appellation_list and GooglelocItem are defined elsewhere in the project.

class Googlelocs(Spider):
    name = 'googlelocs'

    start_urls = []
    for i in appellation_list:
        baseurl = i.replace(',', '').replace(' ', '+')
        cleaned_href = f'http://www.google.com/search?q={baseurl}+coordinates+longitude+latitude+distancesto'
        start_urls.append(cleaned_href)

    def parse(self, response):
        cleaned_href = response.xpath('//*[@id="ires"]/ol/div[1]/h3/a').get().split('https://')[1].split('&')[0]
        yield response.follow(cleaned_href, self.parse_distancesto)

    def parse_distancesto(self, response):
        items = GooglelocItem()

        items['appellation'] = response.xpath('string(/html/body/div[3]/div/div[2]/div[3]/div[2]/p/strong)').get()
        items['latitude'] = response.xpath('string(/html/body/div[3]/div/div[2]/div[3]/div[3]/table/tbody/tr[1]/td)').get()
        items['longitude'] = response.xpath('string(/html/body/div[3]/div/div[2]/div[3]/div[3]/table/tbody/tr[2]/td)').get()
        items['elevation'] = response.xpath('string(/html/body/div[3]/div/div[2]/div[3]/div[3]/table/tbody/tr[10]/td)').get()

        yield items

This is the spider.

python web-scraping scrapy

1 Answer

0 votes

I found the answer.

href = response.xpath('//*[@id="ires"]/ol/div[1]/h3/a/@href').get()

This is the correct XPath to get the href from Google. In addition, I had to accept the link as Google masks it, without trying to modify it, in order to follow it.
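A sketch of why the masked link works untouched (the redirect URL below is hypothetical and shortened): Google serves result hrefs as relative redirects of the form `/url?q=<real URL>&...`. Resolving one against the search page keeps you on google.com, which then redirects to the real target carried in the `q` parameter, so no string surgery is needed.

```python
from urllib.parse import urljoin, urlparse, parse_qs

search_url = "http://www.google.com/search?q=Rheinhessen+Germany+coordinates"

# Hypothetical masked href as it appears in Google search results.
masked = "/url?q=https://www.distancesto.com/coordinates/de/401814.html&sa=U"

# response.follow resolves it the same way urljoin does: the request
# stays on google.com, and Google redirects to the real destination.
followed = urljoin(search_url, masked)

# The real destination travels in the ?q= query parameter.
target = parse_qs(urlparse(followed).query)["q"][0]
print(target)
# -> https://www.distancesto.com/coordinates/de/401814.html
```

Following `followed` as-is lets Scrapy's redirect middleware land on the target page, which is why modifying the href only made things worse.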

© www.soinside.com 2019 - 2024. All rights reserved.