How to find an element with XPath, click it, and then scrape the page

Question · Votes: -1 · Answers: 1
from selenium import webdriver

from bs4 import BeautifulSoup
driver = webdriver.Chrome(r"C:\Users\Matang\Desktop\chromedriver_win32 (1)\chromedriver.exe")
driver.get("https://turo.com/search?airportCode=EWR&customDelivery=true&defaultZoomLevel=11&endDate=04%2F05%2F2019&endTime=11%3A00&international=true&isMapSearch=false&itemsPerPage=200&location=EWR&locationType=Airport&maximumDistanceInMiles=30&sortType=RELEVANCE&startDate=03%2F05%2F2019&startTime=10%3A00")
driver.find_element_by_xpath("""//*[@id="pageContainer-content"]/div[4]/div/div[1]/div[2]/div[1]/div/div/div[1]/div/div[1]/div/div/a""").click()

I want to click the element at the XPath above and then extract the information from the resulting page, but I always get the information from the original URL instead. Can someone please help?

The URL is https://turo.com/search?airportCode=EWR&customDelivery=true&defaultZoomLevel=11&endDate=04%2F05%2F2019&endTime=11%3A00&international=true&isMapSearch=false&itemsPerPage=200&location=EWR&locationType=Airport&maximumDistanceInMiles=30&sortType=RELEVANCE&startDate=03%2F05%2F2019&startTime=10%3A00

python-3.x selenium-webdriver beautifulsoup selenium-chromedriver
1 Answer

0 votes

After you click, it's just a matter of grabbing the HTML source at that moment and then parsing it. You can do that with Selenium itself, or, as I prefer, with BeautifulSoup, simply because I'm more familiar with it. So you would put the code together like this:

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome(r"C:\Users\Matang\Desktop\chromedriver_win32 (1)\chromedriver.exe")
driver.get("https://turo.com/search?airportCode=EWR&customDelivery=true&defaultZoomLevel=11&endDate=04%2F05%2F2019&endTime=11%3A00&international=true&isMapSearch=false&itemsPerPage=200&location=EWR&locationType=Airport&maximumDistanceInMiles=30&sortType=RELEVANCE&startDate=03%2F05%2F2019&startTime=10%3A00")
driver.find_element_by_xpath("""//*[@id="pageContainer-content"]/div[4]/div/div[1]/div[2]/div[1]/div/div/div[1]/div/div[1]/div/div/a""").click()

# If the click triggers navigation or an AJAX load, wait for the new
# content (e.g. with WebDriverWait) before grabbing the source, or
# page_source will still reflect the old page.
soup = BeautifulSoup(driver.page_source, 'html.parser')

# Start finding and grabbing the tags and elements in `soup`
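Once `soup` holds the post-click page source, the extraction itself is ordinary BeautifulSoup work. Here is a minimal sketch against a stand-alone HTML fragment; the tag and class names (`vehicle`, `vehicle-name`, `vehicle-price`) are made up for illustration, and Turo's real markup will differ:

```python
from bs4 import BeautifulSoup

# Stand-in for driver.page_source; the class names are hypothetical.
html = """
<div class="vehicle">
  <h3 class="vehicle-name">Honda Civic 2017</h3>
  <span class="vehicle-price">$45/day</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# find() returns the first matching tag; get_text(strip=True)
# extracts its text with surrounding whitespace removed.
name = soup.find("h3", class_="vehicle-name").get_text(strip=True)
price = soup.find("span", class_="vehicle-price").get_text(strip=True)
print(name, price)  # Honda Civic 2017 $45/day
```

For a results page with many listings, `soup.find_all("div", class_="vehicle")` would return every matching block, which you can then loop over.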