Unable to get href links using requests or Selenium

Problem description

My goal is to extract all href links from this page and find the .pdf links. I tried both the requests library and Selenium, but neither extracts them.

How can I solve this problem? Thanks.

For example: this contains .pdf file links.

Here is the requests code:

    import requests
    from bs4 import BeautifulSoup

    headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0'}
    url="https://www.bain.com/insights/topics/energy-and-natural-resources-report/"
    
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')

    for link in soup.find_all('a'):
        print(link.get('href'))
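
If the anchors really are present in the static HTML that requests receives, the remaining step is just filtering. Here is a minimal sketch, assuming the .pdf links appear as ordinary href attributes (they may instead be injected by JavaScript, in which case requests will never see them); urljoin resolves relative paths against the page URL:

    from urllib.parse import urljoin

    pdf_links = []
    for link in soup.find_all('a', href=True):
        # Resolve relative hrefs (e.g. "/insights/report.pdf") to absolute URLs
        absolute = urljoin(url, link['href'])
        if absolute.lower().endswith('.pdf'):
            pdf_links.append(absolute)

    print(pdf_links)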

Here is the Selenium code:

    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service as ChromeService
    from webdriver_manager.chrome import ChromeDriverManager
    from bs4 import BeautifulSoup

    options = webdriver.ChromeOptions()
    driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=options)

    driver.get("https://www.bain.com/insights/topics/energy-and-natural-resource-report/")
    driver.implicitly_wait(10)

    soup = BeautifulSoup(driver.page_source, 'html.parser')
    for link in soup.find_all('a'):
        print(link.get('href'))

    driver.quit()
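
One likely reason the Selenium version also comes up empty is timing: implicitly_wait only applies to element lookups such as find_element, not to reading page_source, so the HTML may be captured before any JavaScript-rendered links exist. Below is a minimal sketch using an explicit wait instead, assuming the links eventually land in the DOM as plain a tags (the 20-second timeout is a guess, not a measured value):

    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service as ChromeService
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from webdriver_manager.chrome import ChromeDriverManager

    options = webdriver.ChromeOptions()
    driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=options)
    try:
        driver.get("https://www.bain.com/insights/topics/energy-and-natural-resources-report/")
        # Block until at least one <a> element is attached to the DOM, giving
        # dynamically injected links a chance to render before scraping.
        WebDriverWait(driver, 20).until(
            EC.presence_of_all_elements_located((By.TAG_NAME, "a"))
        )
        # Read hrefs directly from Selenium; get_attribute("href") returns
        # the absolute URL, so no manual joining is needed.
        for anchor in driver.find_elements(By.TAG_NAME, "a"):
            href = anchor.get_attribute("href")
            if href and href.lower().endswith(".pdf"):
                print(href)
    finally:
        driver.quit()

If nothing turns up even with the explicit wait, the links are probably fetched over XHR or sit behind the site's bot protection, which is worth checking in the browser's network tab before debugging the scraper further.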
python selenium-webdriver python-requests selenium-chromedriver