I want to web-scrape the names of all US politicians who have traded stocks or other financial instruments. The URL of the site I'm using for this is "https://www.capitoltrades.com/trades".
I set the URL path, scraped the whole page, found the XPath of the HTML element I'm interested in, and managed to get results from the first page. All of that works fine.
IMPORTANT: in this post I will write links without the "https://www." at the beginning, because StackOverflow won't let me post them otherwise.
link = "https://www.capitoltrades.com/trades"
page = read_html(link)
names_path = '//*[@id="__next"]/div/main/div/article/section/div[2]/div[1]/table/tbody/tr/td[1]/div/div/h3/a'
name = page %>% html_elements(xpath = names_path) %>% html_text()
The problem arises, however, when I try to scrape data from the second (and every subsequent) page of the site. When I browse through the pages in my browser, the URL changes to "https://www.capitoltrades.com/trades?page=NNN", where NNN stands for the page number I'm on. To scrape all of that data, I set up a for loop that iterates over all of those addresses, scrapes each one, and appends the interim results to the main result:
for (i in 2:n_pages){
  # new link each iteration
  link_temp = paste("https://www.capitoltrades.com/trades?page=", i, sep = "")
  page_temp = read_html(link_temp)
  name_temp = page_temp %>% html_elements(xpath = names_path) %>% html_text()
  name = c(name, name_temp)
}
The problem is that on each iteration the site scraped by read_html(link_temp) is still the original one, "capitoltrades.com/trades", even though I change the URL every iteration and (try to) reach a different page. Essentially, every iteration outputs the exact same vector...
Things I have tried:
I have thoroughly checked whether I mixed up my variables (I have not).
I have cleared the environment and rerun the whole script from scratch several times (it still doesn't work, so I know the problem is not mixed-up variables).
I opened a brand-new project in a separate file where I only tried to scrape page 10, "capitoltrades.com/trades?page=10" (it still gave me the results from page 1).
I copied the link "https://www.capitoltrades.com/trades?page=10" and pasted it into my browser, and it took me straight to page 10, meaning the link itself is fine.
Following ChatGPT's suggestion, I used a custom user agent, but it still didn't work:
link <- "https://www.capitoltrades.com/trades?page=10"
headers <- c('User-Agent' = 'Mozilla/5.0')
page <- read_html(link, httr::add_headers(.headers=headers))
library(httr)
session <- html_session("https://www.capitoltrades.com/trades?page=10")
page <- session %>% read_html()
In summary, none of these strategies solved my problem. I tried all of them multiple times and in different combinations, and I kept getting the results from the first page. Based on my investigation, I've concluded that the problem lies in the read_html() function itself: it seems to always land on the default first page, no matter whether the link I supply specifies that it should go to the second, third, fourth, etc. page.
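One quick way to confirm that the server really is returning the same document for every page (a hypothetical check, not something from the original post) is to fetch two supposedly different pages and compare their serialized HTML:

```r
library(rvest)

# Fetch two "different" pages and compare their raw HTML.
# If the site builds its table client-side with JavaScript,
# both requests may return the same initial document.
p1 <- read_html("https://www.capitoltrades.com/trades?page=1")
p2 <- read_html("https://www.capitoltrades.com/trades?page=2")
identical(as.character(p1), as.character(p2))
```

If this returns TRUE, the page number in the URL is being ignored by whatever the server sends to non-browser clients, which points away from read_html() itself being the problem.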
The following works for pages 2 through 4. It starts off almost like your code, but with "https://www." in the links. An lapply loop works through pages 2 to 4, using tryCatch to capture any errors. When it's done, check which pages errored out and keep the rest.
suppressPackageStartupMessages({
library(dplyr)
library(rvest)
})
link <- "https://www.capitoltrades.com/trades"
page <- read_html(link)
names_path <- '//*[@id="__next"]/div/main/div/article/section/div[2]/div[1]/table/tbody/tr/td[1]/div/div/h3/a'
name_pg1 <- page %>% html_elements(xpath = names_path) %>% html_text()
n_pages <- 3358                    # total pages listed on the site
page_num <- seq.int(n_pages)[-1L]  # pages 2 .. n_pages
names_list <- lapply(page_num[1:3], \(i) {  # limited to pages 2-4 here
lnk <- sprintf("https://www.capitoltrades.com/trades?page=%d", i)
tryCatch(
read_html(lnk) %>%
html_elements(xpath = names_path) %>%
html_text(),
error = function(e) e
)
})
err <- sapply(names_list, inherits, "error")
names_vec <- names_list[!err] %>% unlist()
names_vec <- c(name_pg1, names_vec)
names_vec
#> [1] "Rudy Yakym" "Jared Moskowitz" "Jared Moskowitz" "Jared Moskowitz"
#> [5] "Jared Moskowitz" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [9] "Kevin Hern" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [13] "Rudy Yakym" "Jared Moskowitz" "Jared Moskowitz" "Jared Moskowitz"
#> [17] "Jared Moskowitz" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [21] "Kevin Hern" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [25] "Rudy Yakym" "Jared Moskowitz" "Jared Moskowitz" "Jared Moskowitz"
#> [29] "Jared Moskowitz" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [33] "Kevin Hern" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [37] "Rudy Yakym" "Jared Moskowitz" "Jared Moskowitz" "Jared Moskowitz"
#> [41] "Jared Moskowitz" "Kevin Hern" "Kevin Hern" "Kevin Hern"
#> [45] "Kevin Hern" "Kevin Hern" "Kevin Hern" "Kevin Hern"
Created on 2023-10-15 with reprex v2.0.2
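Building on the error check above, here is a minimal sketch of retrying only the pages that failed (it reuses names_path, page_num, err, and names_list from the code above; the retry logic itself is my addition):

```r
pages_tried <- page_num[1:3]   # the pages requested on the first pass
failed <- which(err)           # positions in names_list that errored

if (length(failed) > 0) {
  # Re-request only the failed pages and slot the results back in
  names_list[failed] <- lapply(pages_tried[failed], \(i) {
    lnk <- sprintf("https://www.capitoltrades.com/trades?page=%d", i)
    tryCatch(
      read_html(lnk) %>% html_elements(xpath = names_path) %>% html_text(),
      error = function(e) e
    )
  })
}
```

After a retry pass, recompute err the same way to see whether any pages are still failing.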
The records come from an internal API endpoint, bff.capitoltrades.com/trades, which apparently returns up to 100 results per page:
library(stringr)
library(jsonlite)
library(purrr)
trades_url <- "https://bff.capitoltrades.com/trades?per_page=100&page={page_n}&pageSize=100"
# get 1st page to extract pagination details
page_1 <- str_glue(trades_url, page_n = 1) |> fromJSON()
str(page_1$meta$paging)
#> List of 4
#> $ page : int 1
#> $ size : int 100
#> $ totalPages: int 403
#> $ totalItems: int 40292
# limit requests to 3 first pages,
# trades_url includes "{page_n}"
map(2:3, \(page_n) str_glue(trades_url)) |>
# slowly -- limit request rate
map(slowly(fromJSON)) |>
# insert 1st page to 1st position
append(list(page_1), after = 0) |>
# extract $data$politician frame from every list item
map(list("data", "politician")) |>
# bind frames
list_rbind() |>
  # reduce output by keeping only unique rows
unique()
#> _stateId chamber dob firstName gender lastName nickname party
#> 1 in house 1984-02-24 Rudolph male Yakym Rudy republican
#> 2 fl house 1980-12-18 Jared male Moskowitz <NA> democrat
#> 6 ok house 1961-12-04 Kevin male Hern <NA> republican
#> 25 fl house 1964-08-23 Clifford male Franklin Scott republican
#> 49 ok senate 1977-07-26 Markwayne male Mullin <NA> republican
#> 118 wa house 1965-06-15 Richard male Larsen Rick democrat
#> 119 nv house 1966-11-07 Suzanne female Lee Susie democrat
#> 120 fl house 1948-05-16 Lois female Frankel <NA> democrat
#> 223 pa house 1948-05-10 George male Kelly Mike republican
#> 224 wa house 1962-02-17 Suzan female DelBene <NA> democrat
#> 226 oh house 1988-11-13 Max male Miller <NA> republican
#> 230 ca house 1976-09-13 Rohit male Khanna Ro democrat
Created on 2023-10-15 with reprex v2.0.2
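To extend this to every page rather than just the first three, the totalPages value from the first response can drive the loop. A sketch, assuming the same trades_url template and page_1 object from above, and adding a one-second pause between requests to be polite to the API:

```r
total <- page_1$meta$paging$totalPages  # 403 at the time of the answer

all_politicians <- map(2:total, \(page_n) str_glue(trades_url)) |>
  # throttle to roughly one request per second
  map(slowly(fromJSON, rate = rate_delay(1))) |>
  # put the already-fetched first page at the front
  append(list(page_1), after = 0) |>
  map(list("data", "politician")) |>
  list_rbind() |>
  unique()
```

At 403 pages this takes several minutes with the delay, but it avoids hammering an endpoint that isn't officially documented.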