If you need to, you can try a search on the site:
url = r'http://www.cpso.on.ca/docsearch/'
It's an aspx page (I started this trek yesterday, so sorry for the newbie question).
Using BeautifulSoup, I can get the __VIEWSTATE and __EVENTVALIDATION like this:
viewstate = soup.find('input', {'id' : '__VIEWSTATE'})['value']
ev = soup.find('input', {'id' : '__EVENTVALIDATION'})['value']
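Not sure yet whether it matters for this page, but ASP.NET forms sometimes expect other hidden fields too (e.g. __EVENTTARGET and __EVENTARGUMENT), so here's a minimal sketch of collecting every hidden input at once so they can all be merged into the POST data. The helper name hidden_fields is mine, and I'm illustrating with Python 3's html.parser (on Python 2 the module is called HTMLParser):

```python
# Sketch: collect all hidden <input> fields from a page, since ASP.NET
# postbacks may require more than just __VIEWSTATE / __EVENTVALIDATION.
# (Python 3's html.parser shown; on Python 2 the module is HTMLParser.)
from html.parser import HTMLParser

class HiddenFieldCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == 'input' and d.get('type') == 'hidden':
            # Map each hidden field's name to its value
            self.fields[d.get('name', '')] = d.get('value', '')

def hidden_fields(html):
    parser = HiddenFieldCollector()
    parser.feed(html)
    return parser.fields

sample = ('<form><input type="hidden" name="__VIEWSTATE" value="abc"/>'
          '<input type="hidden" name="__EVENTVALIDATION" value="xyz"/></form>')
print(hidden_fields(sample))  # {'__VIEWSTATE': 'abc', '__EVENTVALIDATION': 'xyz'}
```

The resulting dict could then be updated with the name fields before urlencoding.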
The headers can be set like this:
# urllib2 expects standard header names, not CGI-style HTTP_* keys
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
           'Accept': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
           'Content-Type': 'application/x-www-form-urlencoded'}
If you visit the web page, the only values I really want to pass in are the first and last name...
LN = "smith"
FN = "a"
data = {"__VIEWSTATE": viewstate, "__EVENTVALIDATION": ev,
        "ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtLastName": LN,
        "ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtFirstName": FN}
So putting it all together looks like this:
import urllib
import urllib2
import urlparse
import BeautifulSoup
url = r'http://www.cpso.on.ca/docsearch/'
html = urllib2.urlopen(url).read()
soup = BeautifulSoup.BeautifulSoup(html)
viewstate = soup.find('input', {'id' : '__VIEWSTATE'})['value']
ev = soup.find('input', {'id' : '__EVENTVALIDATION'})['value']
# urllib2 expects standard header names, not CGI-style HTTP_* keys
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
           'Accept': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
           'Content-Type': 'application/x-www-form-urlencoded'}
LN = "smith"
FN = "a"
data = {"__VIEWSTATE": viewstate, "__EVENTVALIDATION": ev,
        "ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtLastName": LN,
        "ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtFirstName": FN}
data = urllib.urlencode(data)
request = urllib2.Request(url,data,headers)
response = urllib2.urlopen(request)
newsoup = BeautifulSoup.BeautifulSoup(response.read())
for i in newsoup:
    print i
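For what it's worth, the urlencode step does cope with those '$'-laden ASP.NET control names; they just come out percent-encoded in the request body. A tiny sketch (Python 3's urllib.parse shown, and the control name here is shortened/hypothetical):

```python
# Sketch: the '$' in ASP.NET control names is percent-encoded as %24
# by urlencode, which the server decodes back on its end.
from urllib.parse import urlencode  # urllib.urlencode on Python 2

payload = urlencode({"__VIEWSTATE": "abc",
                     "ctl00$Foo$txtLastName": "smith"})  # hypothetical name
print(payload)  # __VIEWSTATE=abc&ctl00%24Foo%24txtLastName=smith
```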
The problem is that it doesn't really seem to give me any results... not sure whether I need to supply a value for every textbox in the form or what... maybe I'm just doing it incorrectly. Either way, I'm just hoping someone can set me straight. I thought I had it working, but I was expecting to see a list of doctors with their contact information.
Any insight is much appreciated. I've used BeautifulSoup before, but I think my problem is just with sending the request and including the right amount of information in the data portion.
Thanks!
Took @pguardiario's advice and went the mechanize route... much simpler:
import mechanize
url = r'http://www.cpso.on.ca/docsearch/'
request = mechanize.Request(url)
response = mechanize.urlopen(request)
forms = mechanize.ParseResponse(response, backwards_compat=False)
response.close()
form = forms[0]
form['ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtLastName']='Smith'
form['ctl00$ContentPlaceHolder1$MainContentControl1$ctl00$txtPostalCode']='K1H'
print mechanize.urlopen(form.click()).read()
I'm still a long way from done, but this got me much further along.