I'm working on a small project during this lovely quarantine period.
I'm trying to pull search results from Google News (e.g. for "vaccine") and run some sentiment analysis on the collected headlines.
So far, I can't seem to find the right tag to collect the headlines.
Here is my code:
from textblob import TextBlob
import requests
from bs4 import BeautifulSoup

class Analysis:
    def __init__(self, term):
        self.term = term
        self.subjectivity = 0
        self.sentiment = 0
        self.url = 'https://www.google.com/search?q={0}&source=lnms&tbm=nws'.format(self.term)

    def run(self):
        response = requests.get(self.url)
        print(response.text)
        soup = BeautifulSoup(response.text, 'html.parser')
        headline_results = soup.find_all('div', class_="phYMDf nDgy9d")
        for h in headline_results:
            blob = TextBlob(h.get_text())
            self.sentiment += blob.sentiment.polarity / len(headline_results)
            self.subjectivity += blob.sentiment.subjectivity / len(headline_results)

a = Analysis('Vaccine')
a.run()
print(a.term, 'Subjectivity: ', a.subjectivity, 'Sentiment: ', a.sentiment)
The result is always 0 for sentiment and 0 for subjectivity. I feel like the problem is with class_="phYMDf nDgy9d".
Any help would be appreciated.
Namaste
If you browse to that link you see the fully rendered page, but requests.get does not execute or load any data beyond the single page you requested (no JavaScript runs). Fortunately, there is some data you can scrape. I suggest you use an HTML beautifier service such as codebeautify to get a better understanding of the page structure.
Also, if you see classes like phYMDf nDgy9d, be sure to avoid searching by them. They are minified versions of class names, so whenever part of the CSS code changes, the class you are looking for gets a new name.
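As a sketch of that idea (the HTML snippet and class names below are invented for illustration; real Google markup differs), you can anchor on stable structural attributes such as href patterns instead of minified class names:

```python
import re
from bs4 import BeautifulSoup

# Toy HTML imitating minified class names; real Google markup differs.
html = ("<div id='main'>"
        "<a class='xYz12 aBc34' href='/url?q=https://example.com/story1'>Story one</a>"
        "<a class='xYz12 aBc34' href='/url?q=https://example.com/story2'>Story two</a>"
        "<a class='nav' href='/settings'>Settings</a>"
        "</div>")

soup = BeautifulSoup(html, "html.parser")

# Fragile: breaks as soon as the minified class names are regenerated.
by_class = soup.find_all("a", class_="xYz12 aBc34")

# Sturdier: match the structural pattern of result links instead.
by_href = soup.find_all("a", href=re.compile(r"^/url"))

print([a.get_text() for a in by_href])
```

Both queries find the same two links here, but only the href-based one survives a class-name rotation.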
I may have gone a bit overboard, but I managed to dig deep enough to scrape the specific parts, and your code works now.
When you look at a prettified version of the requested HTML file, the necessary content is in a div with the id main. Its children start with a div element containing "Google Search", continue with style elements, and after one empty div element come the post div elements. The last two elements in that child list are footer and script elements. We can truncate these with [3:-2] and then get clean data (well, mostly) under that tree. If you check the rest of the code after the posts variable, I think you will understand it.
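As a toy illustration of that [3:-2] truncation (the structure below is a simplified stand-in for what the answer describes; the real page has far more nesting):

```python
from bs4 import BeautifulSoup

# Simplified stand-in: a few header children, the post divs,
# then the trailing footer and script elements.
html = ("<div id='main'>"
        "<div>Google Search</div><style></style><div></div>"
        "<div>post 1</div><div>post 2</div><div>post 3</div>"
        "<footer></footer><script></script>"
        "</div>")

soup = BeautifulSoup(html, "html.parser")
mainDiv = soup.find("div", {"id": "main"})

# Drop the three leading header children and the trailing footer/script.
posts = list(mainDiv.children)[3:-2]
print([p.get_text() for p in posts])
```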
Here is the code:
from textblob import TextBlob
import requests, re
from bs4 import BeautifulSoup
from pprint import pprint

class Analysis:
    def __init__(self, term):
        self.term = term
        self.subjectivity = 0
        self.sentiment = 0
        self.url = 'https://www.google.com/search?q={0}&source=lnms&tbm=nws'.format(self.term)

    def run(self):
        response = requests.get(self.url)
        #print(response.text)
        soup = BeautifulSoup(response.text, 'html.parser')
        mainDiv = soup.find("div", {"id": "main"})
        # Drop the leading header children and the trailing footer/script.
        posts = [i for i in mainDiv.children][3:-2]
        news = []
        for post in posts:
            # Result links all have hrefs starting with "/url".
            reg = re.compile(r"^/url.*")
            cursor = post.findAll("a", {"href": reg})
            postData = {}
            postData["headline"] = cursor[0].find("div").get_text()
            postData["source"] = cursor[0].findAll("div")[1].get_text()
            postData["timeAgo"] = cursor[1].next_sibling.find("span").get_text()
            postData["description"] = cursor[1].next_sibling.find("span").parent.get_text().split("· ")[1]
            news.append(postData)
        pprint(news)
        for h in news:
            blob = TextBlob(h["headline"] + " " + h["description"])
            # Average polarity/subjectivity over all collected posts.
            self.sentiment += blob.sentiment.polarity / len(news)
            self.subjectivity += blob.sentiment.subjectivity / len(news)

a = Analysis('Vaccine')
a.run()
print(a.term, 'Subjectivity: ', a.subjectivity, 'Sentiment: ', a.sentiment)
Some output:
[{'description': 'It comes after US health officials said last week they had '
'started a trial to evaluate a possible vaccine in Seattle. '
'The Chinese effort began on...',
'headline': 'China embarks on clinical trial for virus vaccine',
'source': 'The Star Online',
'timeAgo': '5 saat önce'},
{'description': 'Hanneke Schuitemaker, who is leading a team working on a '
'Covid-19 vaccine, tells of the latest developments and what '
'needs to be done now.',
'headline': 'Vaccine scientist: ‘Everything is so new in dealing with this '
'coronavirus’',
'source': 'The Guardian',
'timeAgo': '20 saat önce'},
.
.
.
Vaccine Subjectivity: 0.34522727272727277 Sentiment: 0.14404040404040402
[{'description': '10 Cool Tech Gadgets To Survive Working From Home. From '
'Wi-Fi and cell phone signal boosters, to noise-cancelling '
'headphones and gadgets...',
'headline': '10 Cool Tech Gadgets To Survive Working From Home',
'source': 'CRN',
'timeAgo': '2 gün önce'},
{'description': 'Over the past few years, smart home products have dominated '
'the gadget space, with goods ranging from innovative updates '
'to the items we...',
'headline': '6 Smart Home Gadgets That Are Actually Worth Owning',
'source': 'Entrepreneur',
'timeAgo': '2 hafta önce'},
.
.
.
Home Gadgets Subjectivity: 0.48007305194805205 Sentiment: 0.3114683441558441
I used the headline and description data together, but you can work with the data however you want. You have the data now :)
Use this:
headline_results = soup.find_all('div', {'class' : 'BNeawe vvjwJb AP7Wnd'})
You already printed response.text; if you want to find specific data, search for it within the response.text output.