Selenium web scraping - why does this script return 500,000 rows?

Problem description (votes: 1, answers: 1)

I made a script that scrapes all the product information for certain categories of a website, but when a category only has around 3,000 items my code returns more than 500,000 rows.

I'm also really new to Python, so any help is appreciated.

The code is attached below:

# -*- coding: utf-8 -*-
"""
Created on Mon Feb  4 20:31:23 2019

@author: 
"""
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By
import selenium.webdriver.support.ui as ui
import selenium.webdriver.support.expected_conditions as EC
from bs4 import BeautifulSoup
import os, sys
import time
from urllib.parse import urljoin
import pandas as pd
import re
import numpy as np

# base set up

options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
os.chdir("C:/Users/user/desktop/scripts/python")
cwd = os.getcwd()
main_dir = os.path.abspath(os.path.join(cwd, os.pardir))
print('Main Directory:', main_dir)

chromedriver = ("C:/Users/user/desktop/scripts/python/chromedriver.exe")
os.environ["webdriver.chrome.driver"] = chromedriver
# browser = webdriver.Chrome(options=options, executable_path=chromedriver)

mainurl = "https://www.bunnings.com.au/our-range"

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
page = requests.get(mainurl, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')

# script start

subcat = []
for item in soup.findAll('ul', attrs={'class': 'chalkboard-menu'}):
    links = item.find_all('a')
    for link in links:
        subcat.append(urljoin(mainurl, link.get("href")))
subcat

result = pd.DataFrame()
for adrs in subcat[0:1]:
#    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
#    page = requests.get(adrs, headers=headers)
#    soup = BeautifulSoup(page.content, 'html.parser')
#    pagelink = adrs
#    adrs="https://www.bunnings.com.au/our-range/storage-cleaning/cleaning/brushware-mops/indoor-brooms"
    catProd = pd.DataFrame()
    url = adrs
    browser = webdriver.Chrome(options=options, executable_path=chromedriver)
    browser.get(url)

    lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
    match = False
    while (match == False):
        lastCount = lenOfPage
        time.sleep(3)
        lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
        if lastCount == lenOfPage:
            match = True
    reached= False
    while (reached==False):
        try:
            browser.find_element_by_css_selector('#MoreProductsButton > span').click()
            lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
            match = True
            while (match == True):
                lastCount = lenOfPage
                time.sleep(3)
                lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
                if lastCount == lenOfPage:
                    match = True
                    browser.find_element_by_css_selector('#content-layout_inside-anchor > div.search-result__content > div > div > section > div:nth-child(4) > div > div:nth-child(2) > div > button > div.view-more_btn_text').click()
        except:
            reached=True
# grab the items
            page = browser.page_source
            soup = BeautifulSoup(page, 'html.parser')
            browser.close()

        for article in soup.findAll('article', attrs={'class':'product-list__item hproduct special-order-product'}):
            for product in article.findAll('img', attrs={'class': 'photo'}):
                pName = product['alt']
                pCat = adrs
                pID = article['data-product-id']
                temp= pd.DataFrame({'proID':[pID],'Product':[pName],'Category':[pCat]})
                catProd=catProd.append(temp)
                result = result.append(catProd)
        time.sleep(3)
        result.head()

# write the results to an Excel file
writer = pd.ExcelWriter('test123123.xlsx')
result.to_excel(writer,'Sheet1')
writer.save()

The code takes something like 20 minutes to iterate over ~3,000 items, which is fine in my opinion, but the real, crazy problem is that I get far too many duplicates: 500,000+ rows when certain categories should only give me about 3,500 rows.

python selenium web-scraping
1 Answer
0 votes

The problem is here:

for product in article.findAll('img', attrs={'class': 'photo'}):
    pName = product['alt']
    pCat = adrs
    pID = article['data-product-id']
    temp= pd.DataFrame({'proID':[pID],'Product':[pName],'Category':[pCat]}) #<-------------- temp DataFrame
    catProd=catProd.append(temp) #<------------ temp appending into catProd dataframe
    result = result.append(catProd)  #<----------- catProd appending into result DataFrame

You are essentially doing a double append: temp is appended to your catProd DataFrame, and then the whole accumulated catProd is appended to your result DataFrame, once per product. So result grows quadratically rather than linearly: for a category with n products it ends up with roughly 1 + 2 + ... + n = n(n+1)/2 rows, which at around 1,000 products is already about 500,000 rows, almost all of them duplicates.

There are a couple of ways to fix this. One is to move result = result.append(catProd) outside the inner loops, so the full catProd is appended to result only after catProd has been filled. Or just eliminate catProd altogether and keep appending temp directly to result.
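
With the second option, the inner loops would look something like this (a minimal sketch; DataFrame.append exists in the pandas versions current at the time, on newer pandas you would use pd.concat instead):

for article in soup.findAll('article', attrs={'class': 'product-list__item hproduct special-order-product'}):
    for product in article.findAll('img', attrs={'class': 'photo'}):
        temp = pd.DataFrame({'proID': [article['data-product-id']],
                             'Product': [product['alt']],
                             'Category': [adrs]})
        result = result.append(temp)   # one accumulator, one append per product (pandas < 2.0)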

I also cleaned up a few things, e.g. resetting the DataFrame index and not writing the index to the Excel file. I also added explicit waits (i.e. wait until the button shows up) instead of time.sleep, which should speed things up a little.
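
With the ui, EC and By imports already at the top of the script, such a wait might look like this (a sketch; the 10-second timeout is an assumption):

# block until the "load more" button is clickable instead of sleeping a fixed 3 seconds
wait = ui.WebDriverWait(browser, 10)
more_button = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#MoreProductsButton > span')))
more_button.click()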

Full code below. Don't forget to change for adrs in subcat[0:1] so it goes through the whole list; I just have it going through the first URL.

Lastly, I had to comment out some things, like the os.chdir lines, so that I could run it, so don't forget to uncomment those.

One last thing: I threw in a way to time it. Just running through the first URL, with 895 products, and saving the output took:

Duration: 0 Hours, 02 Minutes, 48 Seconds
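
Here is one way the described changes could be assembled into a working script. It is a sketch, not the answer's exact listing: the chromedriver path, URL and CSS selectors are carried over from the question and are assumptions about the site's markup, the fixed sleeps are replaced with explicit waits, and instead of calling DataFrame.append inside the loop it collects plain dicts and builds the DataFrame once (DataFrame.append was removed in pandas 2.x).

import time
from urllib.parse import urljoin

import pandas as pd
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

start = time.time()

options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
chromedriver = "C:/Users/user/desktop/scripts/python/chromedriver.exe"

mainurl = "https://www.bunnings.com.au/our-range"
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}

# collect the sub-category links from the landing page
page = requests.get(mainurl, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
subcat = [urljoin(mainurl, link.get('href'))
          for menu in soup.findAll('ul', attrs={'class': 'chalkboard-menu'})
          for link in menu.find_all('a')]

rows = []                        # collect plain dicts; build the DataFrame once at the end
for adrs in subcat[0:1]:         # change to `subcat` to run every category
    # Selenium 3-style constructor, as in the question; Selenium 4 uses Service() instead
    browser = webdriver.Chrome(options=options, executable_path=chromedriver)
    browser.get(adrs)
    wait = WebDriverWait(browser, 10)   # 10-second timeout is an assumption

    # keep clicking the "more products" button; any failure to find or click it ends the loop
    while True:
        browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        try:
            more = wait.until(EC.element_to_be_clickable(
                (By.CSS_SELECTOR, '#MoreProductsButton > span')))
            more.click()
        except Exception:
            break

    soup = BeautifulSoup(browser.page_source, 'html.parser')
    browser.close()

    for article in soup.findAll('article', attrs={'class': 'product-list__item hproduct special-order-product'}):
        for product in article.findAll('img', attrs={'class': 'photo'}):
            rows.append({'proID': article['data-product-id'],
                         'Product': product['alt'],
                         'Category': adrs})

# the index is already 0..n-1 when built from a list, and it is not written to the file
result = pd.DataFrame(rows)
result.to_excel('test123123.xlsx', sheet_name='Sheet1', index=False)

elapsed = int(time.time() - start)
print('Duration: %d Hours, %02d Minutes, %02d Seconds'
      % (elapsed // 3600, (elapsed % 3600) // 60, elapsed % 60))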