How do I search for a specific Unicode string when web scraping?

Question

I recently became interested in web scraping with Python and have worked through some simple examples, but I don't know how to handle languages that aren't covered by ASCII, for example searching for a specific string in an HTML file, or writing such strings to a file.

from urllib.parse import urljoin
import requests
import bs4
website = 'http://book.iranseda.ir'
book_url = 'http://book.iranseda.ir/DetailsAlbum/?VALID=TRUE&g=209103'

soup1 = bs4.BeautifulSoup(requests.get(book_url).text, 'lxml')
match1 = soup1.find_all('a', class_='download-mp3')
for m in match1:
    m = m['href'].replace('q=10', 'q=9')
    url = urljoin(website, m)
    print(url)
    print()

Looking at the page at book_url, each row has different text, and the text is in Persian. Suppose I need the last row; its text is "صدایکلکتاب". How can I search for this string within the <li>, <div>, and <a> tags?

python web-scraping beautifulsoup non-ascii-characters
1 Answer

You need to set the encoding on the requests response to UTF-8. It looks like the requests module isn't decoding the page the way you want. As mentioned in this SO post, you can tell requests which encoding to expect.

from urllib.parse import urljoin
import requests
import bs4
website = 'http://book.iranseda.ir'
book_url = 'http://book.iranseda.ir/DetailsAlbum/?VALID=TRUE&g=209103'

req = requests.get(book_url)
req.encoding = 'UTF-8'  # decode the response as UTF-8 so the Persian text comes through intact
soup1 = bs4.BeautifulSoup(req.text, 'lxml')
match1 = soup1.find_all('a', class_='download-mp3')
for m in match1:
    m = m['href'].replace('q=10', 'q=9')
    url = urljoin(website, m)
    print(url)
    print()

The only change here is

req = requests.get(book_url)
req.encoding = 'UTF-8'
soup1 = bs4.BeautifulSoup(req.text, 'lxml')
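
With the encoding fixed, you can then search for the Persian text itself. The following is a minimal sketch, not part of the original answer, that scans the <li>, <div>, and <a> tags for the string quoted in the question and writes any matches to a file. The names target and matches.txt are illustrative, and the string as quoted in the question has no spaces between words, so you may need to adjust it to the exact spelling used on the page.

from urllib.parse import urljoin
import requests
import bs4

website = 'http://book.iranseda.ir'
book_url = 'http://book.iranseda.ir/DetailsAlbum/?VALID=TRUE&g=209103'

req = requests.get(book_url)
req.encoding = 'UTF-8'
soup1 = bs4.BeautifulSoup(req.text, 'lxml')

# Persian text from the question; adjust to match the page exactly (including spaces).
target = 'صدایکلکتاب'

with open('matches.txt', 'w', encoding='utf-8') as out:
    # Check every <li>, <div>, and <a> whose rendered text contains the target string.
    for tag in soup1.find_all(['li', 'div', 'a']):
        if target in tag.get_text():
            out.write(tag.get_text(strip=True) + '\n')
            # If the matching tag is (or contains) a link, resolve it against the site root.
            link = tag if tag.name == 'a' else tag.find('a')
            if link is not None and link.has_attr('href'):
                out.write(urljoin(website, link['href']) + '\n')

Opening the output file with encoding='utf-8' matters for the second part of the question: it lets the Persian strings be written to disk without UnicodeEncodeError on platforms whose default encoding is not UTF-8.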