How to print the href attribute using BeautifulSoup while automating with Selenium?

Question (votes: 0, answers: 3)

[Screenshot: the blue element is what I want to access for web scraping]

The href value of the blue element is what I want to access from this HTML.

I tried several ways to print the links, but nothing worked.

My code is as follows:

discover_page = BeautifulSoup(r.text, 'html.parser')

finding_accounts = discover_page.find_all("a", class_="author track")
print(len(finding_accounts))

finding_accounts = discover_page.find_all('a[class="author track"]')
print(len(finding_accounts))

accounts = discover_page.select('a', {'class': 'author track'})['href']
print(len(accounts))

Output:
0
0
TypeError: 'dict' object is not callable
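
A note on the three attempts: find_all() does not parse CSS selector strings (so 'a[class="author track"]' matches nothing), and select() accepts only a selector string; in older BeautifulSoup versions a second positional argument is interpreted as an internal parameter, which is the likely source of the TypeError. The usual forms, as a sketch reusing the names above:

# class_ keyword: matches <a> tags whose class attribute is "author track".
finding_accounts = discover_page.find_all("a", class_="author track")

# CSS selectors belong in select(), and select() takes only the selector string.
finding_accounts = discover_page.select("a.author.track")

# select() returns a list, so read href off each tag, not off the list itself.
links = [a["href"] for a in discover_page.select("a.author.track")]

Even with correct selectors, the counts here would stay at 0, because the HTML handed to BeautifulSoup comes from a fresh requests call without the Selenium login session; see the note after the full code below.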

The URL of the page is https://society6.com/discover, but after I log in to my account the URL changes to https://society6.com/society?show=2.

What am I doing wrong here?

Note: I am using the Selenium Chrome browser here. The answers given here work in my terminal, but not when I run the file.

My full code:

from selenium import webdriver
import time
import requests
from bs4 import BeautifulSoup
import lxml

driver = webdriver.Chrome()
driver.get("https://society6.com/login?done=/")
username = driver.find_element_by_id('email')
username.send_keys("[email protected]")
password = driver.find_element_by_id('password')
password.send_keys("sultan1997")
driver.find_element_by_name('login').click()

time.sleep(5)

driver.find_element_by_link_text('My Society').click()
driver.find_element_by_link_text('Discover').click()

time.sleep(5)

r = requests.get(driver.current_url)
r.raise_for_status()

'''discover_page = BeautifulSoup(r.html.raw_html, 'html.parser')

finding_accounts = discover_page.find_all("a", class_="author track")
print(len(finding_accounts))

finding_accounts = discover_page.find_all('a[class="author track"]')
print(len(finding_accounts))


links = []
for a in discover_page.find_all('a', class_ = 'author track'): 
        links.append(a['href'])
        #links.append(a.get('href'))

print(links)'''

#discover_page.find_all('a')

discover_page = BeautifulSoup(r.text, 'html.parser')  # parse the fetched page before searching it

links = []
for a in discover_page.find_all("a", attrs={"class": "author track"}):
    links.append(a['href'])
    #links.append(a.get('href'))

print(links)

#soup.find_all("a", attrs = {"class": "author track"})

soup = BeautifulSoup(r.content, "lxml")
a_tags = soup.find_all("a", attrs={"class": "author track"})

for a in soup.find_all('a',{'class':'author track'}):
    print('https://society6.com'+a['href'])

The code inside the docstring is code I was also trying to use.
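
A likely reason the counts come back as 0: requests.get(driver.current_url) opens a brand-new, logged-out HTTP session that shares none of the cookies from the Selenium login, so the returned HTML is not the page the browser is showing. A minimal sketch that parses the page Selenium has already loaded instead:

from bs4 import BeautifulSoup

# driver is the logged-in Selenium session from the code above.
discover_page = BeautifulSoup(driver.page_source, 'html.parser')

links = [a['href'] for a in discover_page.find_all('a', class_='author track')]
print(links)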

python selenium web-scraping beautifulsoup webdriverwait
3 Answers
0 votes

If you want to find all the links manually with BeautifulSoup, then go with requests-html.

Sample code to get all the links:

from requests_html import HTMLSession
from bs4 import BeautifulSoup

url = 'https://society6.com/discover'
session = HTMLSession(mock_browser=True)
r = session.get(url, headers={'User-Agent': 'Mozilla/5.0'})

print(r.html.links)
print(r.html.absolute_links)

soup = BeautifulSoup(r.html.raw_html, 'html.parser')
a_tags = soup.find_all("a", attrs={"class": "author track"})
for a_tag in a_tags:
    print(a_tag['href'])
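
If the author links are injected by JavaScript rather than present in the raw HTML, requests-html can also render the page before parsing. A sketch continuing from the snippet above (render() downloads its own Chromium on first use):

# Execute the page's JavaScript, then re-parse the rendered HTML.
r.html.render()
soup = BeautifulSoup(r.html.raw_html, 'html.parser')
print(len(soup.find_all("a", attrs={"class": "author track"})))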

0 votes
import requests
from bs4 import BeautifulSoup

data = requests.get('https://society6.com/discover')
soup_data = BeautifulSoup(data.content, "lxml")

for a in soup_data.find_all('a',{'class':'author track'}):
    print('https://society6.com'+a['href'])
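
This fetches the page anonymously, so it can only see authors that are visible without logging in. If the links you want require the login, one variation is to copy the cookies from the Selenium session into requests (a sketch, assuming the login state lives in cookies):

import requests
from bs4 import BeautifulSoup

# driver is an already logged-in Selenium session (see the question's code).
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])

data = session.get(driver.current_url)
soup_data = BeautifulSoup(data.content, "lxml")
for a in soup_data.find_all('a', {'class': 'author track'}):
    print('https://society6.com' + a['href'])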

0 votes

As per your question about printing the href from the desired elements, you can use the following Selenium-only solution:

  • Code block:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe')
driver.get("https://society6.com/login?done=/")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input#email"))).send_keys("[email protected]")
driver.find_element_by_css_selector("input#password").send_keys("sultan1997")
driver.find_element_by_css_selector("button[name='login']").click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a#nav-user-my-society>span"))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Discover"))).click()
hrefs_elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "a.author.track")))
for element in hrefs_elements:
    print(element.get_attribute("href"))

  • Console output:

https://society6.com/pivivikstrm
https://society6.com/cafelab
https://society6.com/cafelab
https://society6.com/colorandcolor
https://society6.com/83oranges
https://society6.com/aftrdrk
https://society6.com/alaskanmommabear
https://society6.com/thindesign
https://society6.com/colorandcolor
https://society6.com/aftrdrk
https://society6.com/aljahorvat
https://society6.com/bribuckley
https://society6.com/hennkim
https://society6.com/franciscomffonseca
https://society6.com/83oranges
https://society6.com/nadja1
https://society6.com/beeple
https://society6.com/absentisdesigns
https://society6.com/alexandratarasoff
https://society6.com/artdekay880
https://society6.com/annaki
https://society6.com/cafelab
https://society6.com/bribuckley
https://society6.com/bitart
https://society6.com/draw4you
https://society6.com/cafelab
https://society6.com/beeple
https://society6.com/burcukorkmazyurek
https://society6.com/absentisdesigns
https://society6.com/deanng
https://society6.com/beautifulhomes
https://society6.com/aftrdrk
https://society6.com/printsproject
https://society6.com/bluelela
https://society6.com/anipani
https://society6.com/ecmazur
https://society6.com/batkei
https://society6.com/menchulica
https://society6.com/83oranges
https://society6.com/7115
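
Note that Selenium 4 removed the find_element_by_* helpers along with the chrome_options and executable_path arguments, so on a current install the equivalent calls look roughly like this (a sketch of the changed lines only):

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("start-maximized")
# Selenium 4: the driver binary is wrapped in a Service; options use the options= keyword.
driver = webdriver.Chrome(service=Service(r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe'), options=options)
driver.get("https://society6.com/login?done=/")
# find_element_by_css_selector(...) becomes find_element(By.CSS_SELECTOR, ...).
driver.find_element(By.CSS_SELECTOR, "input#password").send_keys("sultan1997")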