Request to the Tesla inventory API times out with requests.get in Python

Question

I'm writing a Python web scraper for Tesla's inventory:

url = "https://www.tesla.com/inventory/api/v1/inventory-results?query={%22query%22:{%22model%22:%22my%22,%22condition%22:%22new%22,%22options%22:{%22TRIM%22:[%22LRAWD%22],%22AUTOPILOT%22:[%22AUTOPILOT%22]},%22arrangeby%22:%22Price%22,%22order%22:%22asc%22,%22market%22:%22US%22,%22language%22:%22en%22,%22super_region%22:%22north%20america%22,%22lng%22:-122.1257,%22lat%22:47.6722,%22zip%22:%2294401%22,%22range%22:100},%22offset%22:0,%22count%22:50,%22outsideOffset%22:0,%22outsideSearch%22:false}"
resp = requests.get(url, timeout=30)

I always get a timeout error. However, when I paste the same URL into Chrome, I get the JSON response almost immediately. What am I missing?
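One way to narrow this down is to check whether the timeout happens while connecting or while waiting for the response body; requests raises a different exception subclass for each case. Below is a minimal diagnostic sketch reusing the same url as above (the tuple timeout and the two exception classes are standard requests features):

import requests

url = "https://www.tesla.com/inventory/api/v1/inventory-results?query=..."  # same query string as above

try:
    # timeout=(connect, read): fail fast on the handshake, allow longer for the body
    resp = requests.get(url, timeout=(5, 30))
    print(resp.status_code)
except requests.exceptions.ConnectTimeout:
    # The TCP/TLS connection never completed -- points to a network or proxy problem
    print("connect timeout")
except requests.exceptions.ReadTimeout:
    # Connected, but the server never answered -- the request is likely being filtered
    print("read timeout")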

python timeout web-crawler
1 Answer

Even with just a basic User-Agent I get a 200 and a JSON payload with 55 matches.

from pprint import pp
import requests


headers = {'User-Agent':'Mozilla/5.0'}
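# this minimal browser-like User-Agent is enough for the endpoint to respond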
url = "https://www.tesla.com/inventory/api/v1/inventory-results?query={%22query%22:{%22model%22:%22my%22,%22condition%22:%22new%22,%22options%22:{%22TRIM%22:[%22LRAWD%22],%22AUTOPILOT%22:[%22AUTOPILOT%22]},%22arrangeby%22:%22Price%22,%22order%22:%22asc%22,%22market%22:%22US%22,%22language%22:%22en%22,%22super_region%22:%22north%20america%22,%22lng%22:-122.1257,%22lat%22:47.6722,%22zip%22:%2294401%22,%22range%22:100},%22offset%22:0,%22count%22:50,%22outsideOffset%22:0,%22outsideSearch%22:false}"
resp = requests.get(url, headers=headers, timeout=5)
print(resp)
pp(resp.json(), depth=1)
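If you need the individual listings rather than the raw JSON, the parsed payload can presumably be iterated. The key names below ("results", "VIN", "Price") are assumptions based on what such inventory responses typically look like, not something confirmed above, so inspect resp.json() first:

# Hypothetical post-processing -- "results", "VIN" and "Price" are assumed key names
data = resp.json()
for car in data.get("results", []):
    print(car.get("VIN"), car.get("Price"))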