Read the links on a website and store them in a list

Question · votes: 0 · answers: 1

I am trying to read in the URLs for data from StatsCan, as follows:


# rvest supplies read_html(), html_nodes(), html_attr() and re-exports the %>% pipe
library(rvest)

# 2015
url <- "https://www.nrcan.gc.ca/our-natural-resources/energy-sources-distribution/clean-fossil-fuels/crude-oil/oil-pricing/crude-oil-prices-2015/18122"

x1 <- read_html(url) %>% 
  html_nodes(xpath = '//*[@class="col-md-4"]/ul/li/ul/li/a') %>% 
  html_attr("href")


# 2014
url2 <- "https://www.nrcan.gc.ca/our-natural-resources/energy-sources-distribution/clean-fossil-fuels/crude-oil/oil-pricing/crude-oil-prices-2014/16993"

x2 <- read_html(url2) %>%   # note: url2 here, not url
  html_nodes(xpath = '//*[@class="col-md-4"]/ul/li/ul/li/a') %>% 
  html_attr("href")

Doing this returns two empty lists; I am confused because the same code works for this link: https://www.nrcan.gc.ca/our-natural-resources/energy-sources-distribution/clean-fossil-fuels/crude-oil/oil-pricing/18087. Ultimately, I want to loop over the list and read the table on each page, as follows:

# str_c() is from stringr, as_tibble() from tibble, write.xlsx() from openxlsx
library(stringr)
library(tibble)
library(openxlsx)

for (i in seq_along(x2)){
  out.data <- read_html(x2[i]) %>% 
    html_table(fill = TRUE) %>% 
    `[[`(1) %>% 
    as_tibble()
  write.xlsx(out.data, str_c(destination,i,".xlsx"))
}
Tags: html, r, xpath, web-scraping, rvest
1 Answer

0 votes

To extract all of the URLs, I would suggest using the CSS selector ".field-item li a" and then subsetting on a pattern.

# str_subset() is from stringr
library(stringr)

links <- read_html(url) %>% 
    html_nodes(".field-item li a") %>% 
    html_attr("href") %>% 
    str_subset("fuel-prices/crude")