I'm trying to use Beautiful Soup to scrape links from a Reddit table, and I can successfully extract everything from the table except the URLs. I'm using item.find_all('a'), but it returns an empty list with this code:
import praw
import csv
import requests
from bs4 import BeautifulSoup

def Authorize():
    """Authorizes Reddit API"""
    reddit = praw.Reddit(client_id='',
                         client_secret='',
                         username='',
                         password='',
                         user_agent='user')

url = 'https://old.reddit.com/r/formattesting/comments/94nc49/will_it_work/'
headers = {'User-Agent': 'Mozilla/5.0'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.text, 'html.parser')

table_extract = soup.find_all('table')[0]
table_extract_items = table_extract.find_all('a')

for item in table_extract_items:
    letter_name = item.contents[0]
    links = item.find_all('a')
    print(letter_name)
    print(links)
This is what it returns:
6GB EVGA GTX 980 TI
[]
Intel i7-4790K
[]
Asus Z97-K Motherboard
[]
2x8 HyperX Fury DDR3 RAM
[]
Elagto HD 60 Pro Capture Card
[]
I would expect a URL where each empty list is, below each table row.

I'm not sure if it makes a difference to how this is constructed, but the end goal is to extract all the table contents and the links (keeping the association between the two) and save them to a CSV as two columns. For now, though, I'm just trying to keep it simple with print.
You are almost there. Your table_extract_items are HTML anchors, and you need to use their text and href attributes to extract the content and the link from them. I guess a poor choice of variable names got you confused. The line links = item.find_all('a') inside the for loop is wrong: find_all only searches a tag's descendants, and an anchor element has no nested anchors inside it, so it always returns an empty list.
Here is my solution:
for anchor in table.findAll('a'):
    # if not anchor: findAll returns an empty list, .find() returns None
    #     continue
    href = anchor['href']
    print(href)
    print(anchor.text)
The table in my code is what you named table_extract in your code.
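To see why your original loop printed empty lists, here is a minimal standalone demonstration using a made-up single table cell shaped like one row of the Reddit table:

from bs4 import BeautifulSoup

# One table cell containing one anchor, as in the Reddit table.
cell = BeautifulSoup(
    '<td><a href="https://imgur.com/a/Y1WlDiK">6GB EVGA GTX 980 TI</a></td>',
    'html.parser')

anchor = cell.find('a')
print(anchor.find_all('a'))  # [] -- an anchor has no <a> descendants
print(anchor.text)           # 6GB EVGA GTX 980 TI
print(anchor['href'])        # https://imgur.com/a/Y1WlDiK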
Check it out:
In [40]: for anchor in table.findAll('a'):
   ....:     # if not anchor:
   ....:     #     continue
   ....:     href = anchor['href']
   ....:     text = anchor.text
   ....:     print(href, "--", text)
   ....:
https://imgur.com/a/Y1WlDiK -- 6GB EVGA GTX 980 TI
https://imgur.com/gallery/yxkPF3g -- Intel i7-4790K
https://imgur.com/gallery/nUKnya3 -- Asus Z97-K Motherboard
https://imgur.com/gallery/9YIU19P -- 2x8 HyperX Fury DDR3 RAM
https://imgur.com/gallery/pNqXC2z -- Elagto HD 60 Pro Capture Card
https://imgur.com/gallery/5K3bqMp -- Samsung EVO 250 GB SSD
https://imgur.com/FO8JoQO -- Corsair Scimtar MMO Mouse
https://imgur.com/C8PFsX0 -- Corsair K70 RGB Rapidfire Keyboard
https://imgur.com/hfCEzMA -- I messed up
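Since your stated end goal is a two-column CSV, here is a minimal sketch of that last step, assuming the same requests/BeautifulSoup setup as in your code; the parts.csv filename and the column labels are arbitrary choices:

import csv

import requests
from bs4 import BeautifulSoup

url = 'https://old.reddit.com/r/formattesting/comments/94nc49/will_it_work/'
headers = {'User-Agent': 'Mozilla/5.0'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find_all('table')[0]

# Writing one (name, url) row per anchor keeps the association
# between the link text and its URL.
with open('parts.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'url'])  # header row; labels are arbitrary
    for anchor in table.findAll('a'):
        writer.writerow([anchor.text, anchor['href']])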