How do I get the IP address from an HTTP request using the requests library?

Question · 22 votes · 1 answer

I'm making HTTP requests with the requests library in Python, but I need the IP address of the server that responded to the request. I'm trying to avoid making two calls, since a second call might reach a different IP address than the one that actually served the response.

Is that possible? Is there any Python HTTP library that would let me do this?

P.S. I also need to make HTTPS requests and go through an authenticated proxy.

Update 1:

Example:

import requests

proxies = {
  "http": "http://user:[email protected]:3128",
  "https": "http://user:[email protected]:1080",
}

response = requests.get("http://example.org", proxies=proxies)
response.ip  # This attribute doesn't exist; it's just what I would like to do

I would like some method or attribute on the response that tells me which IP address the request connected to. In other libraries I've been able to do this by digging down to the socket object and calling its getpeername() method.
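As background, here is a minimal sketch of what getpeername() reports, using only the standard-library socket module over a loopback connection (no requests, no external network), so the addresses below are just the local endpoints of the demo connection:

```python
import socket

# Set up a throwaway server on a free ephemeral loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

# Connect a client to it and accept the connection.
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

# getpeername() reports the address of the *remote* end of the socket:
# the client sees the server's address, and the accepted connection
# sees the client's address.
print(client.getpeername())  # ('127.0.0.1', <server port>)
print(conn.getpeername())    # ('127.0.0.1', <client port>)

client.close()
conn.close()
server.close()
```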

python python-requests pycurl httplib httplib2
1 Answer

37 votes

It turns out this is fairly involved.

Here is a monkey-patch that works with requests version 1.2.3:

Wrap the _make_request method on HTTPConnectionPool so that the result of socket.getpeername() is stored on the HTTPResponse instance.

For me, on Python 2.7.3, that instance was available as response.raw._original_response.

from requests.packages.urllib3.connectionpool import HTTPConnectionPool

def _make_request(self, conn, method, url, **kwargs):
    response = self._old_make_request(conn, method, url, **kwargs)
    sock = getattr(conn, 'sock', False)
    if sock:
        response.peer = sock.getpeername()
    else:
        response.peer = None
    return response

HTTPConnectionPool._old_make_request = HTTPConnectionPool._make_request
HTTPConnectionPool._make_request = _make_request

import requests

r = requests.get('http://www.google.com')
print r.raw._original_response.peer

This yields:

('2a00:1450:4009:809::1017', 80, 0, 0)

Ah, but if a proxy is involved, or the response is chunked, HTTPConnectionPool._make_request is not called.

So here is a new version that patches httplib.HTTPConnection.getresponse instead:

import httplib

def getresponse(self, *args, **kwargs):
    response = self._old_getresponse(*args, **kwargs)
    if self.sock:
        response.peer = self.sock.getpeername()
    else:
        response.peer = None
    return response


httplib.HTTPConnection._old_getresponse = httplib.HTTPConnection.getresponse
httplib.HTTPConnection.getresponse = getresponse

import requests

def check_peer(resp):
    orig_resp = resp.raw._original_response
    if hasattr(orig_resp, 'peer'):
        return orig_resp.peer

Running this:

>>> r1 = requests.get('http://www.google.com')
>>> check_peer(r1)
('2a00:1450:4009:808::101f', 80, 0, 0)
>>> r2 = requests.get('https://www.google.com')
>>> check_peer(r2)
('2a00:1450:4009:808::101f', 443, 0, 0)
>>> r3 = requests.get('http://wheezyweb.readthedocs.org/en/latest/tutorial.html#what-you-ll-build')
>>> check_peer(r3)
('162.209.99.68', 80)

I also checked with a proxy configured; in that case, the proxy's address is returned.


Update 2016/01/19

est provided an alternative that doesn't need the monkey-patch:

rsp = requests.get('http://google.com', stream=True)
# grab the IP while you can, before you consume the body!
print rsp.raw._fp.fp._sock.getpeername()
# consuming the body calls read(); after that the socket is no longer available
print rsp.content

Update 2016/05/19

Copied from the comments for visibility: Richard Kenneth Niescior provided the following, confirmed to work with requests 2.10.0 and Python 3.

rsp = requests.get(..., stream=True)
rsp.raw._connection.sock.getpeername()

Update 2019/02/22

Python 3 with requests version 2.19.1:

resp = requests.get(..., stream=True)
# note: getsockname() returns the local end of the connection;
# use getpeername() if you want the server's address
resp.raw._connection.sock.socket.getsockname()

Update 2020/01/31

Python 3.8 with requests 2.22.0:

resp = requests.get('https://www.google.com', stream=True)
resp.raw._connection.sock.getsockname()
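Since these private attribute paths have shifted between requests/urllib3 versions, a best-effort helper can try each known path in turn. This is a sketch relying on private internals (the attribute names are the ones from the updates above, not a public API), and it returns None when no path matches:

```python
import requests

def get_peer_address(url):
    """Best-effort lookup of the remote address a response came from.
    Relies on private urllib3 internals that differ across versions."""
    resp = requests.get(url, stream=True)  # stream=True keeps the socket open
    raw = resp.raw
    # Attribute paths observed in different requests/urllib3 versions:
    probes = (
        lambda: raw._connection.sock.getpeername(),         # requests >= 2.10
        lambda: raw._connection.sock.socket.getpeername(),  # TLS-wrapped socket
        lambda: raw._fp.fp._sock.getpeername(),             # older versions
    )
    for probe in probes:
        try:
            return probe()
        except AttributeError:
            continue
    return None
```

Read the address before consuming resp.content, since draining the body may release the connection back to the pool.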