I have a script with these two functions:
import requests
from bs4 import BeautifulSoup

# Getting the content of each page
def GetContent(url):
    response = requests.get(url)
    return response.content

# Extracting the sites
def CiteParser(content):
    soup = BeautifulSoup(content)
    print "---> site #: ", len(soup('cite'))
    result = []
    for cite in soup.find_all('cite'):
        result.append(cite.string.split('/')[0])
    return result
When I run the program, I get the following error:
result.append(cite.string.split('/')[0])
AttributeError: 'NoneType' object has no attribute 'split'
Sample output:
URL: <URL That I use to search 'can be google, bing, etc'>
---> site #: 10
site1.com
.
.
.
site10.com
URL: <URL That I use to search 'can be google, bing, etc'>
File "python.py", line 49, in CiteParser
result.append(cite.string.split('/')[0])
AttributeError: 'NoneType' object has no attribute 'split'
It can happen that `cite.string` has nothing in it and is `None`, so you can first check that the string is not `None`:
# Extracting the sites
def CiteParser(content):
    soup = BeautifulSoup(content)
    #print soup
    print "---> site #: ", len(soup('cite'))
    result = []
    for cite in soup.find_all('cite'):
        if cite.string is not None:
            result.append(cite.string.split('/')[0])
            print cite
    return result
for cite in soup.find_all('cite'):
    if (cite.string is None) or (len(cite.string) == 0):
        continue
    result.append(cite.string.split('/')[0])
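Note that `cite.string` is `None` by design whenever the tag contains nested markup (e.g. `<cite><b>site</b>/path</cite>`), not only when it is empty. Collecting every text fragment inside the tag, as BeautifulSoup's `get_text()` does, sidesteps the problem entirely. Here is a minimal stdlib-only sketch of that idea (no BeautifulSoup needed); the `CiteExtractor` class and the sample HTML are made up for illustration:

```python
from html.parser import HTMLParser

class CiteExtractor(HTMLParser):
    """Collect the text inside each <cite> tag, including text from
    nested tags. This mirrors what cite.get_text() does in bs4, and
    is why it avoids the AttributeError: it never assumes the tag
    holds exactly one string."""

    def __init__(self):
        super().__init__()
        self.in_cite = 0      # depth counter: are we inside a <cite>?
        self.parts = []       # text fragments of the current <cite>
        self.results = []     # one "site" per non-empty <cite> tag

    def handle_starttag(self, tag, attrs):
        if tag == 'cite':
            self.in_cite += 1
            self.parts = []

    def handle_data(self, data):
        if self.in_cite:
            self.parts.append(data)

    def handle_endtag(self, tag):
        if tag == 'cite' and self.in_cite:
            self.in_cite -= 1
            text = ''.join(self.parts)
            if text:  # skip empty <cite></cite>
                self.results.append(text.split('/')[0])

html = "<cite>site1.com/page</cite><cite><b>site2.com</b>/x</cite><cite></cite>"
p = CiteExtractor()
p.feed(html)
print(p.results)  # -> ['site1.com', 'site2.com']
```

The second `<cite>` would have made `cite.string` return `None` in the original code, but its text is still recovered here because fragments are joined instead of relying on a single string child.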
I upgraded conda and threadpoolctl, and that fixed it.