Python / Selenium: scraping a JS-rendered table without using the JSON data

Question (votes: 0, answers: 1)

I want to scrape the table at:

https://www2.sgx.com/securities/annual-reports-financial-statements

I understand this can be done by inspecting the network requests and finding the API call, e.g. https://api.sgx.com/financialreports/v1.0?pagestart=3&pagesize=250&params=id,companyName,documentDate,securityName,title,url — but I'd like to know whether it is possible to get all the data directly from the table instead, since otherwise I would need to parse 16 JSON files.

When trying to scrape with Selenium, I can only reach the end of the visible table (after clicking "Clear All" on the left, the table grows to contain all the data I need).

Any ideas are welcome!

Edit: here is the code; it only returns 144 of the thousands of cells in the table.

from time import sleep  # to wait for the page to finish loading
from selenium import webdriver  # to interact with the site
from selenium.webdriver.common.by import By  # locator strategies
from selenium.webdriver.chrome.service import Service
from selenium.common.exceptions import WebDriverException  # raised when the url is wrong
from webdriver_manager.chrome import ChromeDriverManager  # installs and finds the chromedriver executable


BASE_URL = 'https://www2.sgx.com/securities/annual-reports-financial-statements'
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.maximize_window()

try:
    driver.get(BASE_URL)
except WebDriverException:
    print("Url given is not working, please try again.")
    exit()

# clicking away the pop-up
sleep(5)
header = driver.find_element(By.ID, "website-header")
driver.execute_script("arguments[0].click();", header)

# clicking the "Clear All" button to clear the calendar filter
sleep(2)
clear_field = driver.find_element(By.XPATH, '/html/body/div[1]/main/div[1]/article/template-base/div/div/sgx-widgets-wrapper/widget-filter-listing/widget-filter-listing-financial-reports/section[2]/div[1]/sgx-filter/sgx-form/div[2]/span[2]')
clear_field.click()

# clicking to select only Annual Reports
sleep(2)
driver.find_element(By.XPATH, "/html/body/div[1]/main/div[1]/article/template-base/div/div/sgx-widgets-wrapper/widget-filter-listing/widget-filter-listing-financial-reports/section[2]/div[1]/sgx-filter/sgx-form/div[1]/div[1]/sgx-input-select/label/span[2]/input").click()
sleep(1)
driver.find_element(By.XPATH, "//span[text()='Annual Report']").click()

rows = driver.find_elements(By.CLASS_NAME, "sgx-table-cell")
print(len(rows))
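The 144-cell limit is typical of a virtualized table: the `sgx-table` component only renders the rows currently in view, so a single `find_elements` call sees only that window. One possible workaround (an untested sketch; the `sgx-table` scroll-container selector is an assumption, so inspect the page and adjust it) is to repeatedly scroll the table and re-collect cells until the count stops growing:

```python
from time import sleep


def collect_all_cells(driver, pause=1.0, max_rounds=200):
    """Scroll the virtualized table until no new cells render.

    Using 'sgx-table' as the scrollable container is an assumption;
    adjust the selector if scrolling it has no effect.
    The string locators "css selector" / "class name" are the values
    behind Selenium's By.CSS_SELECTOR / By.CLASS_NAME constants.
    """
    container = driver.find_element("css selector", "sgx-table")
    seen = 0
    cells = []
    for _ in range(max_rounds):
        # jump to the bottom of the container to trigger lazy rendering
        driver.execute_script(
            "arguments[0].scrollTop = arguments[0].scrollHeight;", container)
        sleep(pause)  # give the table time to render the next chunk of rows
        cells = driver.find_elements("class name", "sgx-table-cell")
        if len(cells) <= seen:  # nothing new appeared: we reached the end
            break
        seen = len(cells)
    return cells
```

Whether this captures every row depends on whether the widget keeps already-rendered rows in the DOM; if it recycles them, the cells would have to be read inside the loop instead of after it.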
Tags: python, ajax, selenium-webdriver
1 Answer

Votes: 0

I know you asked for a solution that does not use the API, but I think using it is the cleaner approach.

(The output is 3709 documents.)

import requests

# 250 documents per page, requesting only the fields we need;
# {} is filled in with the page number.
URL_TEMPLATE = 'https://api.sgx.com/financialreports/v1.0?pagestart={}&pagesize=250&params=id%2CcompanyName%2CdocumentDate%2CsecurityName%2Ctitle%2Curl'

NUM_OF_PAGES = 16
data = []
for page_num in range(1, NUM_OF_PAGES):  # pagestart values 1..15
    r = requests.get(URL_TEMPLATE.format(page_num))
    if r.status_code == 200:
        data.extend(r.json()['data'])
print('we have {} documents'.format(len(data)))
for doc in data:
    print(doc)