Python BeautifulSoup - scraping multiple pages where the data for a given URL sits inside an iframe

Question · 1 vote · 1 answer

We have this code (thanks to Cody and Alex Tereshenkov):

import pandas as pd
import requests
from bs4 import BeautifulSoup

pd.set_option('display.width', 1000)
pd.set_option('display.max_columns', 50)

url = "https://www.aliexpress.com/store/feedback-score/1665279.html"
s = requests.Session()
r = s.get(url)

soup = BeautifulSoup(r.content, "html.parser")
iframe_src = soup.select_one("#detail-displayer").attrs["src"]

r = s.get(f"https:{iframe_src}")

soup = BeautifulSoup(r.content, "html.parser")
rows = []
for row in soup.select(".history-tb tr"):
    rows.append([e.text for e in row.select("th, td")])

df = pd.DataFrame.from_records(
    rows,
    columns=['Feedback', '1 Month', '3 Months', '6 Months'],
)

# remove first row with column names
df = df.iloc[1:]
df['Shop'] = url.split('/')[-1].split('.')[0]

pivot = df.pivot(index='Shop', columns='Feedback')
pivot.columns = [' '.join(col).strip() for col in pivot.columns.values]

column_mapping = dict(
    zip(pivot.columns.tolist(), [col[:12] for col in pivot.columns.tolist()]))
# column_mapping
# {'1 Month Negative (1-2 Stars)': '1 Month Nega',
#  '1 Month Neutral (3 Stars)': '1 Month Neut',
#  '1 Month Positive (4-5 Stars)': '1 Month Posi',
#  '1 Month Positive feedback rate': '1 Month Posi',
#  '3 Months Negative (1-2 Stars)': '3 Months Neg',
#  '3 Months Neutral (3 Stars)': '3 Months Neu',
#  '3 Months Positive (4-5 Stars)': '3 Months Pos',
#  '3 Months Positive feedback rate': '3 Months Pos',
#  '6 Months Negative (1-2 Stars)': '6 Months Neg',
#  '6 Months Neutral (3 Stars)': '6 Months Neu',
#  '6 Months Positive (4-5 Stars)': '6 Months Pos',
#  '6 Months Positive feedback rate': '6 Months Pos'}
pivot.columns = [column_mapping[col] for col in pivot.columns]

pivot.to_excel('Report.xlsx')

The code extracts the table data for a given URL (the table lives inside an iframe) and flattens it into a single row, matching the "Feedback History" table:

(screenshot of the "Feedback History" table)


Meanwhile, we have a file in the same project folder ("urls.txt") containing 50 URLs like these:

https://www.aliexpress.com/store/feedback-score/4385007.html
https://www.aliexpress.com/store/feedback-score/1473089.html
https://www.aliexpress.com/store/feedback-score/3085095.html
https://www.aliexpress.com/store/feedback-score/2793002.html
https://www.aliexpress.com/store/feedback-score/4656043.html
https://www.aliexpress.com/store/feedback-score/4564021.html

We just need to extract the same data for every URL in that file.

How can we do that?

python pandas web-scraping beautifulsoup
1 Answer
2 votes

Since there are only about 50 URLs, you can simply read them into a list and send a request to each one. I just tested these 6 URLs and the solution works for them, but you may want to add some try/except handling for any exceptions that might occur.
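As a concrete illustration of the try/except suggestion, here is a minimal sketch; `fetch_one` is a hypothetical stand-in for the per-URL scraping body (the requests/BeautifulSoup/pivot steps in the code below), so one failing store does not abort the whole run:

```python
def scrape_all(urls, fetch_one):
    """Run fetch_one on each URL, collecting results and failures separately."""
    results, failures = [], []
    for url in urls:
        url = url.strip()        # drop the trailing newline left by readlines()
        if not url:
            continue             # skip blank lines in urls.txt
        try:
            results.append(fetch_one(url))
        except Exception as exc:
            # record the failure and carry on with the remaining URLs
            failures.append((url, repr(exc)))
    return results, failures
```

After the loop you can inspect `failures` to retry or report the stores that could not be scraped.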

import pandas as pd
import requests
from bs4 import BeautifulSoup
with open('urls.txt', 'r') as f:
    # strip newlines so requests gets clean URLs, and skip blank lines
    urls = [line.strip() for line in f if line.strip()]
master_list=[]
for idx,url in enumerate(urls):
    s = requests.Session()
    r = s.get(url)
    soup = BeautifulSoup(r.content, "html.parser")
    iframe_src = soup.select_one("#detail-displayer").attrs["src"]
    r = s.get(f"https:{iframe_src}")
    soup = BeautifulSoup(r.content, "html.parser")
    rows = []
    for row in soup.select(".history-tb tr"):
        rows.append([e.text for e in row.select("th, td")])
    df = pd.DataFrame.from_records(
        rows,
        columns=['Feedback', '1 Month', '3 Months', '6 Months'],
    )

    df = df.iloc[1:]
    shop=url.split('/')[-1].split('.')[0]
    df['Shop'] = shop
    pivot = df.pivot(index='Shop', columns='Feedback')
    master_list.append([shop]+pivot.values.tolist()[0])

# after the loop, build the final frame; the last pivot's columns
# are the same for every shop, so reuse them for the headers
final_output = pd.DataFrame(master_list)
pivot.columns = [' '.join(col).strip() for col in pivot.columns.values]
column_mapping = dict(zip(pivot.columns.tolist(), [col[:12] for col in pivot.columns.tolist()]))
final_output.columns = ['Shop'] + [column_mapping[col] for col in pivot.columns]
final_output.set_index('Shop', inplace=True)
final_output.to_excel('Report.xlsx')

Output:

(screenshot of the resulting Report.xlsx)

A better solution you might consider is avoiding pandas altogether. Once you have the data, you can shape it into a list of rows and then use XlsxWriter.
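A minimal sketch of that pandas-free route, assuming the scraped rows have already been collected as plain lists (the values below are made up for illustration):

```python
import xlsxwriter

# Illustrative data: one header row plus one flattened row per shop,
# as produced by the scraping loop above.
rows = [
    ["Shop", "1 Month Posi", "3 Months Pos", "6 Months Pos"],
    ["1665279", "1068", "3619", "8265"],  # made-up example values
]

workbook = xlsxwriter.Workbook("Report.xlsx")
worksheet = workbook.add_worksheet()
for row_idx, row in enumerate(rows):
    worksheet.write_row(row_idx, 0, row)  # write one list per spreadsheet row
workbook.close()
```

This keeps the whole pipeline as plain lists and avoids the pivot/column-renaming gymnastics.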
