How do I loop over a set of classes with XPath or CSS selectors?


I want to loop over elements on this website: https://www.dccomics.com/comics

At the bottom of the page there is a browse-comics section, and I want to scrape the name of every comic.

This is my current code:

# imports
from selenium import webdriver
from bs4 import BeautifulSoup 
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By


# website urls
base_url = "https://www.dccomics.com/"
comics_url = "https://www.dccomics.com/comics"

# Chrome session
driver = webdriver.Chrome("C:\\laragon\\www\\Proftaak\\chromedriver.exe")
driver.get(comics_url)
driver.implicitly_wait(500)


cookies = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[4]/div[2]/div/button')
driver.execute_script("arguments[0].click();", cookies)
driver.implicitly_wait(100)
clear_filter = driver.find_element_by_class_name('clear-all-action')
driver.execute_script("arguments[0].click();", clear_filter)

array = []
for titles in driver.find_elements_by_class_name('result-title'):
    title = titles.find_element_by_xpath('/html/body/div[2]/section/section/div[2]/div/div/div/div/div[3]/div[7]/div[2]/div/div/div/div/div[3]/div[3]/div[2]/div[1]/a/p[1]').text
    array.append({'title': title,})
    print(array)
driver.quit()

I am using the following XPath:

/html/body/div[2]/section/section/div[2]/div/div/div/div/div[3]/div[7]/div[2]/div/div/div/div/div[3]/div[3]/div[2]/div[1]/a/p[1] 

This works, but it only gets the first element with the result-title CSS class (818 in this case).

How can I loop through every result-title class using a CSS selector or XPath?
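For context, the absolute XPath inside the loop never uses the titles loop variable, so every iteration resolves to the same node. A per-element lookup is roughly what the loop needs; a minimal sketch, assuming the text of each result-title element is itself the comic name:

    # Sketch: read the text of each matched result-title element directly,
    # instead of re-querying the whole page with an absolute XPath.
    array = []
    for title_element in driver.find_elements_by_class_name('result-title'):
        array.append({'title': title_element.text})
    print(array)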

python selenium xpath css-selectors webdriverwait
1 Answer

To scrape the name of each comic you need to induce WebDriverWait for visibility_of_all_elements_located(), and you can use either of the following locator strategies:

  • Using CSS_SELECTOR:

    driver.get('https://www.dccomics.com/comics')
    print([my_elem.text for my_elem in WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.browse-result>a p:not(.result-date)")))])
    
  • Using XPATH:

    driver.get('https://www.dccomics.com/comics')
    print([my_elem.text for my_elem in WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[contains(@class, 'browse-result')]/a//p[not(contains(@class, 'result-date'))]")))])
    
  • Console output:

    ['PRIMER', 'DOOMSDAY CLOCK PART 2', 'CATWOMAN #22', 'YOU BROUGHT ME THE OCEAN', 'ACTION COMICS #1022', 'BATMAN/SUPERMAN #9', 'BATMAN: GOTHAM NIGHTS #7', 'BATMAN: THE ADVENTURES CONTINUE #5', 'BIRDS OF PREY #1', 'CATWOMAN 80TH ANNIVERSARY 100-PAGE SUPER SPECTACULAR #1', 'DC GOES TO WAR', "DCEASED: HOPE AT WORLD'S END #2", 'DETECTIVE COMICS #1022', 'FAR SECTOR #6', "HARLEY QUINN: MAKE 'EM LAUGH #1", 'HOUSE OF WHISPERS #21', 'JOHN CONSTANTINE: HELLBLAZER #6', 'JUSTICE LEAGUE DARK #22', 'MARTIAN MANHUNTER: IDENTITY', 'SCOOBY-DOO, WHERE ARE YOU? #104', 'SHAZAM! #12', 'TEEN TITANS GO! TO CAMP #15', 'THE JOKER: 80 YEARS OF THE CLOWN PRINCE OF CRIME THE DELUXE EDITION', 'THE LAST GOD: TALES FROM THE BOOK OF AGES #1', 'THE TERRIFICS VOL. 3: THE GOD GAME']
    
  • Note: you have to add the following imports:

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
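Putting the pieces together, a minimal self-contained sketch that collects the titles into the list-of-dicts structure from the question (the chromedriver path is a placeholder, and the cookie banner may still need to be dismissed as in the original code):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Placeholder path -- point this at your local chromedriver binary
    driver = webdriver.Chrome("C:\\path\\to\\chromedriver.exe")
    driver.get('https://www.dccomics.com/comics')

    # Wait until every title element in the browse section is visible,
    # then read the text of each one
    title_elements = WebDriverWait(driver, 20).until(
        EC.visibility_of_all_elements_located(
            (By.CSS_SELECTOR, "div.browse-result>a p:not(.result-date)")))

    array = [{'title': el.text} for el in title_elements]
    print(array)

    driver.quit()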
    