Multithreading / multiprocessing in Selenium

Problem description (votes: 0, answers: 1)

I have written a Python script that takes URLs from a text file and prints the href of the elements on each page. However, my goal is to speed this up for large-scale processing with multiprocessing or multithreading.

In the intended workflow, each browser process would grab the hrefs from its current URL and then load the next link from the queue with the same browser instance (say there are 5 of them). Of course, each link should only be scraped once.

Example input file, HNlinks.txt:

https://news.ycombinator.com/user?id=ingve
https://news.ycombinator.com/user?id=dehrmann
https://news.ycombinator.com/user?id=thanhhaimai
https://news.ycombinator.com/user?id=rbanffy
https://news.ycombinator.com/user?id=raidicy
https://news.ycombinator.com/user?id=svenfaw
https://news.ycombinator.com/user?id=ricardomcgowan

Code:

from selenium import webdriver

driver = webdriver.Chrome()
input1 = open("HNlinks.txt", "r")
urls1 = input1.readlines()

# Visit each URL in turn and print the href of the first anchor
# inside every element with class "athing".
for url in urls1:
    driver.get(url)

    links = driver.find_elements_by_class_name('athing')
    for link in links:
        print(link.find_element_by_css_selector('a').get_attribute("href"))
Tags: python, multithreading, selenium, multiprocessing, python-multithreading
1 Answer
0 votes

Use multiprocessing

Note: I have not tested this answer locally. Please try it out and give feedback:

from multiprocessing import Pool
from selenium import webdriver

input1 = open("HNlinks.txt", "r")
urls1 = input1.readlines()

def load_url(url):
    # Each call starts its own browser, scrapes one URL and quits.
    driver = webdriver.Chrome()
    driver.get(url)
    links = driver.find_elements_by_class_name('athing')
    for link in links:
        print(link.find_element_by_css_selector('a').get_attribute("href"))
    driver.quit()

if __name__ == "__main__":
    # How many concurrent processes do you want to spawn?
    # This is also limited by the number of cores your computer has.
    processes = len(urls1)
    p = Pool(processes)
    p.map(load_url, urls1)
    p.close()
    p.join()
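
The version above launches a fresh Chrome instance for every URL. If you want the workflow described in the question instead (a fixed number of browsers, e.g. 5, each reusing its own instance and taking the next URL from the shared work list), a minimal, untested sketch could give each worker process its own driver via a Pool initializer. The init_worker/global-driver pattern below is my illustration, not part of the original answer, and it keeps the same Selenium 3-style element lookups used elsewhere in this post:

from multiprocessing import Pool
from selenium import webdriver

driver = None  # one browser per worker process, created by the initializer

def init_worker():
    # Runs once in each worker process: start the single Chrome instance
    # that this worker will reuse for every URL it is given.
    global driver
    driver = webdriver.Chrome()

def load_url(url):
    # Reuse this worker's browser instead of launching a new one per URL.
    driver.get(url)
    return [link.find_element_by_css_selector('a').get_attribute("href")
            for link in driver.find_elements_by_class_name('athing')]

if __name__ == "__main__":
    with open("HNlinks.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    # 5 workers -> 5 browsers; Pool.map hands each URL to exactly one worker,
    # so every link is scraped once without any extra queue bookkeeping.
    p = Pool(processes=5, initializer=init_worker)
    for hrefs in p.map(load_url, urls):
        for href in hrefs:
            print(href)
    p.close()   # note: the worker browsers are not quit explicitly here;
    p.join()    # a long-running job would want a cleaner shutdown.

Pool.map already distributes the URLs across the workers, so the "queue" from the question is handled for you; the trade-off is that the five browsers stay open for the whole run and need separate cleanup afterwards.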