I am trying to fetch the content of this URL - https://www.zillow.com/homedetails/131-Avenida-Dr-Berkeley-CA-94708/24844204_zpid/ - and this is my code:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://www.zillow.com/homedetails/131-Avenida-Dr-Berkeley-CA-94708/24844204_zpid/',
    ]

    def parse(self, response):
        filename = 'test.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
When I open the scraped data (test.html), this is the content I get. I tried to find a solution and searched for the message it contains - "ERROR for site owner: Invalid domain for site key" - but that did not solve my problem.
First, try this approach and see whether it works:
import scrapy

Headerz = {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "en-US,en;q=0.9",
    "cache-control": "no-cache",
    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
    "pragma": "no-cache",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "cross-site",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
}


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://www.zillow.com/homedetails/131-Avenida-Dr-Berkeley-CA-94708/24844204_zpid/',
    ]

    def start_requests(self):
        # start_urls is a class attribute, so it must be accessed via self
        yield scrapy.Request(self.start_urls[0], callback=self.parse, headers=Headerz)

    def parse(self, response):
        filename = 'test.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
The reason you are not getting the output you see in a normal browser is that you are not sending the proper headers, which a browser always sends along with every request.
You need to add the headers as shown in the code above, or update them in settings.py.
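If you prefer the settings.py route, a minimal sketch (header values taken from the Headerz dict above; Scrapy merges DEFAULT_REQUEST_HEADERS into every request) could look like this:

```python
# settings.py -- send browser-like headers on every request
DEFAULT_REQUEST_HEADERS = {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "accept-language": "en-US,en;q=0.9",
    "upgrade-insecure-requests": "1",
}

# The user-agent has its own dedicated setting in Scrapy
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"
```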
An even better approach is to use a rotating-proxies library together with a rotating-user-agents library.
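To illustrate the user-agent half of that idea, here is a minimal, framework-agnostic sketch of a Scrapy downloader middleware that picks a random user agent per request (the class name and the two-entry USER_AGENTS list are my own placeholders; in practice you would use a maintained library with a large, current pool of agents):

```python
import random

# Hypothetical pool of user-agent strings; a real setup would use a
# maintained library so the strings stay up to date.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
]


class RotatingUserAgentMiddleware:
    """Downloader middleware: set a random User-Agent on each outgoing request."""

    def process_request(self, request, spider):
        # Overwrite (or set) the User-Agent header before the request is sent
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
        return None  # returning None lets Scrapy continue processing the request
```

You would then register the middleware in settings.py under DOWNLOADER_MIDDLEWARES so it runs for every request the spider makes.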