After scraping the website, I retrieve all of the HTML links. I added them to a set() to drop duplicates, but I am still getting some values I don't want. How do I remove the values '#', '#content', '#uscb-nav-skip-header', '/', and None from the set of links?
from bs4 import BeautifulSoup
import urllib.request

# Get the HTML to scrape
r = urllib.request.urlopen('https://www.census.gov/programs-surveys/popest.html').read()
# Create a BeautifulSoup object to parse it
soup = BeautifulSoup(r, 'html.parser')
# A set removes duplicates
lst2 = set()
for link in soup.find_all('a'):
    lst2.add(link.get('href'))
lst2
{'#',
'#content',
'#uscb-nav-skip-header',
'/',
'/data/tables/time-series/demo/popest/pre-1980-county.html',
'/data/tables/time-series/demo/popest/pre-1980-national.html',
'/data/tables/time-series/demo/popest/pre-1980-state.html',
'/en.html',
'/library/publications/2010/demo/p25-1138.html',
'/library/publications/2010/demo/p25-1139.html',
'/library/publications/2015/demo/p25-1142.html',
'/programs-surveys/popest/data.html',
'/programs-surveys/popest/data/tables.html',
'/programs-surveys/popest/geographies.html',
'/programs-surveys/popest/guidance-geographies.html',
None,
'https://twitter.com/uscensusbureau',
...}
Try the following: for each value you want to remove, for example '/' (even though it is a valid URL), simply add lst2.discard('/') at the end. Since lst2 is a set, discard() removes the value if it is present and does nothing otherwise, so it never raises an error for a missing element.
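The discard approach can be sketched like this; the lst2 below is a small hand-built stand-in for the scraped set, not the full output above:

```python
# Stand-in for the scraped link set (a subset of the real output).
lst2 = {'#', '#content', '#uscb-nav-skip-header', '/', None,
        '/en.html', 'https://twitter.com/uscensusbureau'}

# discard() removes a value if present and does nothing otherwise,
# so it is safe even for values that may not be in the set.
for unwanted in ('#', '#content', '#uscb-nav-skip-header', '/', None):
    lst2.discard(unwanted)

print(lst2)  # only the real links remain
```

Equivalently, since lst2 is a set, you can remove all the unwanted values in one step with set difference: `lst2 -= {'#', '#content', '#uscb-nav-skip-header', '/', None}`.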