<td>
<input type="hidden" name="ctl00$ContentPlaceHolder1$dlstCollege$ctl01$hdnInstituteId" id="ContentPlaceHolder1_dlstCollege_hdnInstituteId_1" value="866 " />
<a id="ContentPlaceHolder1_dlstCollege_hlpkInstituteName_1" href="CollegeDetailedInformation.aspx?Inst=866 ">A.N.A INSTITUTE OF PHARMACEUTICAL SCIENCES & RESEARCH,BAREILLY (866)</a>
<br />
<b>Location:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblAddress_1">13.5 km Bareilly - Delhi road, near rubber factory agras road ,Bareilly</span>
<br />
<b>Course:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblCourse_1">B.Pharm,</span>
<br />
<b>Category:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblInstituteType_1">Private</span>
<br />
<b>Web Address:</b>
<a id="lnkBtnWebURL" href='' target="_blank"></a>
<br />
</td>
</tr>
<tr>
<td>
<input type="hidden" name="ctl00$ContentPlaceHolder1$dlstCollege$ctl02$hdnInstituteId" id="ContentPlaceHolder1_dlstCollege_hdnInstituteId_2" value="486 " />
<a id="ContentPlaceHolder1_dlstCollege_hlpkInstituteName_2" href="CollegeDetailedInformation.aspx?Inst=486 ">A.N.A.COLLEGE OF ENGINEERING & MANAGEMENT,BAREILLY (486)</a>
<br />
<b>Location:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblAddress_2">13.5 Km. NH-24, Bareilly-Delhi Highway, Near Rubber Factory, Bareilly</span>
<br />
<b>Course:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblCourse_2">B.Tech,M.Tech,</span>
<br />
<b>Category:</b>
<span id="ContentPlaceHolder1_dlstCollege_lblInstituteType_2">Private</span>
<br />
<b>Web Address:</b>
<a id="lnkBtnWebURL" href='http://www.anacollege.org/index.html' target="_blank">http://www.anacollege.org/index.html</a>
<br />
</td>
</tr>
I want to extract a specific kind of URL from this site (for example: CollegeDetailedInformation.aspx?Inst=866), but the markup also contains anchor tags that I do not want (for example: http://www.anacollege.org/index.html).
import requests
from bs4 import BeautifulSoup

res = requests.get('https://erp.aktu.ac.in/WebPages/KYC/CollegeList.aspx?City=&CType=&Cu=&Br=&Inst=&IType=')
soup = BeautifulSoup(res.content, 'html.parser')
table = soup.find("table", attrs={'class': 'table table-bordered table-responsive'})
pagelink = []
for anchor in table.findAll('a')[1:]:
    link = anchor['href']
    print(link)
    url = 'https://erp.aktu.ac.in/WebPages/KYC/' + link
    pagelink.append(url)
print(pagelink)
I wrote this code, but it extracts all of the links:
CollegeDetailedInformation.aspx?Inst=486
http://www.anacollege.org/index.html
CollegeDetailedInformation.aspx?Inst=602
http://www.aashlarbschool.com
CollegeDetailedInformation.aspx?Inst=032
http://www.abes.ac.in
CollegeDetailedInformation.aspx?Inst=290
http://www.abesit.in
CollegeDetailedInformation.aspx?Inst=913
http://www.abesitpharmacy.in
CollegeDetailedInformation.aspx?Inst=643
http://www.vitsald.com
CollegeDetailedInformation.aspx?Inst=1036
http://www.abss.edu.in
How can I fix this so that I only get the links that contain the CollegeDetailedInformation.aspx?Inst= part?
I don't know Python, but the general approach is: fill an array inside the for loop, then check each entry for the substring you want and keep only the matches.
Initialize an empty array outside the loop, fill it inside the loop, and then apply a filter, something like in_array() in PHP, checking for CollegeDetailedInformation.aspx?Inst=.
That should be a good starting point; a Python expert can fill in the details.
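In Python, the substring check described above is the `in` operator applied in a list comprehension. A minimal sketch, using sample hrefs copied from the question's output so it runs without network access:

```python
# Sample hrefs taken from the question's output.
hrefs = [
    "CollegeDetailedInformation.aspx?Inst=486",
    "http://www.anacollege.org/index.html",
    "CollegeDetailedInformation.aspx?Inst=602",
    "http://www.aashlarbschool.com",
]

# Keep only the entries containing the wanted substring.
wanted = [h for h in hrefs if "CollegeDetailedInformation.aspx?Inst=" in h]
print(wanted)
# → ['CollegeDetailedInformation.aspx?Inst=486', 'CollegeDetailedInformation.aspx?Inst=602']
```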
Try the following snippet. First install the lxml library with pip, then run:

import requests as rq
from bs4 import BeautifulSoup as bs

res = rq.get('https://erp.aktu.ac.in/WebPages/KYC/CollegeList.aspx?City=&CType=&Cu=&Br=&Inst=&IType=')
soup = bs(res.content, 'lxml')
table = soup.find("table", attrs={'class': 'table table-bordered table-responsive'})
# Keep only attribute values that contain '=' (the query-string links).
links = [elem.strip() for anchor in table.findAll('a') for _, elem in anchor.attrs.items() if "=" in elem]
print(links)
The anchor elements that link to the college details all have an id attribute starting with ContentPlaceHolder1_dlstCollege_. So pass a regex for it in the attrs argument of find_all():

import re

for anchor in table.findAll('a', attrs={"id": re.compile("^ContentPlaceHolder1_dlstCollege_.*")}):
    ...

You could also pass it to the id keyword argument of find_all().
(I would drop the [1:] you put at the end, since it may filter out a link at the start that you do want. If it turns out you still need it, add it back.)
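As a self-contained check of this regex approach, here is a sketch that parses a trimmed-down fragment of the page's own markup (ids copied from the question's HTML) instead of fetching the live site:

```python
import re
from bs4 import BeautifulSoup

# A trimmed-down fragment mirroring the page's markup, with ids copied from the question.
html = """
<table>
  <tr><td>
    <a id="ContentPlaceHolder1_dlstCollege_hlpkInstituteName_1"
       href="CollegeDetailedInformation.aspx?Inst=866">A.N.A INSTITUTE</a>
    <a id="lnkBtnWebURL" href="http://www.anacollege.org/index.html">site</a>
  </td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table")

# Keep only the anchors whose id starts with the DataList prefix.
links = [a["href"] for a in table.find_all(
    "a", attrs={"id": re.compile(r"^ContentPlaceHolder1_dlstCollege_")})]
print(links)
# → ['CollegeDetailedInformation.aspx?Inst=866']
```

The website anchor (id lnkBtnWebURL) is excluded because its id does not match the pattern, so only the detail-page link survives.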