Trying to scrape a specific part of the HTML with Python 3.7, but it keeps returning "None"

Votes: 3 · Answers: 2

I'm a beginner writing some simple Python code to scrape data from a web page. I've found the exact part of the HTML I want, but it keeps returning "None". Scraping works for other parts of the page, just not for this particular section.

I'm using BeautifulSoup to parse the HTML, and since I can already scrape some parts of the page I assume I don't need Selenium. But I still can't figure out how to scrape this one specific part.

Here is the Python code I wrote:

import requests
from bs4 import BeautifulSoup

url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
response = requests.get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')

apt_listings = html_soup.find_all('div', class_='_3RRl_')
print(type(apt_listings))
print(len(apt_listings))

first_apt = apt_listings[0]
first_apt.a
first_add = first_apt.a.text
print(first_add)

apt_rents = html_soup.find_all('div', class_='_3e12V')
print(type(apt_rents))
print(len(apt_rents))

first_rent = apt_rents[0]
print(first_rent)

first_rent = first_rent.find('class', attrs={'data-tid' : 'price'})
print(first_rent)

Here is the output in CMD:

<class 'bs4.element.ResultSet'>
30
address not disclosed
<class 'bs4.element.ResultSet'>
30
<div class="_3e12V" data-tid="price">$2,350</div>
None

The "address not disclosed" is correct and was scraped successfully. I want to get the $2,350, but it keeps returning "None". I think I'm close, but I just can't seem to get the $2,350. Any help is greatly appreciated.

html web-scraping python-3.7
2 Answers
0 votes

You need to use BeautifulSoup's .text attribute instead of .find(), like this:

first_rent = first_rent.text

It's as simple as that.
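For reference, here is a minimal sketch of that fix applied to the question's snippet (assuming the _3e12V class still matches the price divs on the page):

import requests
from bs4 import BeautifulSoup

url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
html_soup = BeautifulSoup(requests.get(url).text, 'html.parser')

apt_rents = html_soup.find_all('div', class_='_3e12V')
first_rent = apt_rents[0]
# .text gives the element's string content, e.g. "$2,350"
print(first_rent.text)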


0 votes

You can extract all of the listings from a script tag and parse them as JSON. The regex finds the script tag that starts with window.__APPLICATION_CONTEXT__ =.


The string that follows is extracted through the capture group (.*) in the regex. Loading that string with json.loads turns the JavaScript object into JSON you can navigate like a Python dict.

You can explore the JSON object here.
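As a toy illustration of that extraction step (the sample string below is made up; the real page embeds the full listings object):

import json
import re

# Hypothetical stand-in for the script tag's contents on the real page
sample = 'window.__APPLICATION_CONTEXT__ = {"store": {"listings": {"listings": []}}}'

# The group (.*) captures everything after the assignment
pattern = re.compile(r'window\.__APPLICATION_CONTEXT__ = (.*)')
raw_json = pattern.search(sample).group(1)

# The captured object literal parses as JSON
context = json.loads(raw_json)
print(context['store']['listings']['listings'])  # -> []

The full code below applies the same pattern to the live page: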

import requests
import json
import re
import pandas as pd
from bs4 import BeautifulSoup as bs

base_url = 'https://www.rent.com/'
res = requests.get('https://www.rent.com/new-york/tuckahoe-apartments?page=2')
soup = bs(res.content, 'lxml')

# Find the script tag whose contents start with the application context assignment
r = re.compile(r'window.__APPLICATION_CONTEXT__ = (.*)')
data = soup.find('script', text=r).text
script = r.findall(data)[0]
items = json.loads(script)['store']['listings']['listings']

results = []
for item in items:
    address = item['address']
    area = ', '.join([item['city'], item['state'], item['zipCode']])
    low_price = item['aggregates']['prices']['low']
    high_price = item['aggregates']['prices']['high']
    listingId = item['listingId']
    url = base_url + item['listingSeoPath']
    # all_info = item
    record = {'address' : address,
              'area' : area,
              'low_price' : low_price,
              'high_price' : high_price,
              'listingId' : listingId,
              'url' : url}
    results.append(record)

df = pd.DataFrame(results, columns = ['address', 'area', 'low_price', 'high_price', 'listingId', 'url'])
print(df)

Sample of results:



A shorter version using the class:

import requests
from bs4 import BeautifulSoup
url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
response = requests.get(url)

soup = BeautifulSoup(response.text, 'html.parser')
print(soup.select_one('._3e12V').text)
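Note that _3e12V is a build-generated class name and may change. Selecting by the data-tid attribute visible in the question's output is likely more stable (a sketch, assuming that attribute is still rendered in the static HTML):

import requests
from bs4 import BeautifulSoup

url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

# Match on the data-tid attribute rather than the hashed class name
price = soup.select_one('div[data-tid="price"]')
print(price.text if price else 'price element not found')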

All prices:

import requests
from bs4 import BeautifulSoup
url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
response = requests.get(url)

html_soup = BeautifulSoup(response.text, 'html.parser')
print([item.text for item in html_soup.select('._3e12V')])
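To pair each rent with its address, the two lists from the question's class names can be zipped together (a sketch, assuming both elements appear once per listing and in the same order):

import requests
from bs4 import BeautifulSoup

url = 'https://www.rent.com/new-york/tuckahoe-apartments?page=2'
html_soup = BeautifulSoup(requests.get(url).text, 'html.parser')

# Mirror the question's access pattern: the address sits in the <a> inside each _3RRl_ div
addresses = [div.a.text for div in html_soup.find_all('div', class_='_3RRl_') if div.a]
prices = [div.text for div in html_soup.find_all('div', class_='_3e12V')]
for address, price in zip(addresses, prices):
    print(address, '->', price)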