rvest - Navigate a site and download Canadian hydrometric data


I am writing an R function that takes a station number, navigates the Canadian hydrometric (Water Office) site, and downloads all the data for that station. I am running into problems, possibly because the radio buttons and/or the search button are unnamed. Here is what I have:

library(rvest)  # session(), html_form(), html_form_set(), session_submit()

station_number <- "08NM083"
url <- "https://wateroffice.ec.gc.ca/search/historical_e.html"
user_a <- httr::user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36")

my_session <- session(url, user_a)

form <- html_form(my_session)[[2]]

which gives:

<form> 'search-form' (GET https://wateroffice.ec.gc.ca/search/historical_results_e.html)
  <field> (submit) : Search
  <field> (radio) search_type: station_name
  <field> (text) station_name: 
  <field> (radio) search_type: station_number
  <field> (text) station_number: 
  <field> (radio) search_type: province
  <field> (select) province: AB
  <field> (radio) search_type: basin
  <field> (select) basin: 
  <field> (radio) search_type: region
  <field> (select) region: ATL
  <field> (radio) search_type: coordinate
  <field> (number) north_degrees: 
  <field> (number) north_minutes: 
  <field> (number) north_seconds: 
  <field> (number) south_degrees: 
  <field> (number) south_minutes: 
  <field> (number) south_seconds: 
  <field> (number) east_degrees: 
  <field> (number) east_minutes: 
  <field> (number) east_seconds: 
  <field> (number) west_degrees: 
  <field> (number) west_minutes: 
  <field> (number) west_seconds: 
  <field> (select) parameter_type: all
  <field> (number) start_year: 1850
  <field> (number) end_year: 2023
  <field> (number) minimum_years: 
  <field> (checkbox) latest_year: Y
  <field> (select) regulation: all
  <field> (select) station_status: all
  <field> (select) operation_schedule: 
  <field> (select) contributing_agency: all
  <field> (select) gross_drainage_operator: >
  <field> (number) gross_drainage_area: 
  <field> (select) effective_drainage_operator: >
  <field> (number) effective_drainage_area: 
  <field> (select) sediment: ---
  <field> (select) real_time: ---
  <field> (select) rhbn: ---
  <field> (select) contributed: ---
  <field> (submit) : Search

However, when I fill in the form and submit it, nothing seems to change.

filled <- form %>% 
  html_form_set(station_number = station_number, 
                search_type = "station_number")

resp <- session_submit(x = my_session, form = filled)

my_session
resp

> my_session
<session> https://wateroffice.ec.gc.ca/search/historical_e.html
  Status: 200
  Type:   text/html; charset=UTF-8
  Size:   45034
> resp
<session> https://wateroffice.ec.gc.ca/search/historical_e.html
  Status: 200
  Type:   text/html; charset=UTF-8
  Size:   45284

Any suggestions are welcome!

Edit

kaliiiiiiiii's suggestion to paste the station number into the URL worked perfectly for that part of my problem! I still can't figure out how to download the csv file, though.

Current code:

station_number <- "08NM083"
url <- paste0("https://wateroffice.ec.gc.ca/search/historical_results_e.html?search_type=station_number&station_number=", 
              station_number, 
              "&start_year=1850&end_year=2023&minimum_years=&gross_drainage_operator=%3E&gross_drainage_area=&effective_drainage_operator=%3E&effective_drainage_area=")
user_a <- httr::user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36")

my_session <- session(url, user_a)

form <- html_form(my_session)[[2]]

filled <- form %>% 
  html_form_set(check_all = "all")

resp <- session_submit(x = my_session, form = filled, submit = "download")
resp

link <- resp %>% 
  read_html() %>% 
  html_element("p+ section .col-lg-4:nth-child(1) a") %>% 
  html_attr("href")

full_link <- url_absolute(link, url)
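As an aside, the long hand-built query string above can also be generated with httr's modify_url(), which percent-encodes the values for you (">" becomes "%3E"). A sketch: build_search_url() is a hypothetical helper, and the empty parameters from the original URL are dropped on the assumption the server treats a missing parameter the same as an empty one.

```r
library(httr)

# Hypothetical helper (not from the original post): build the historical
# search URL for a station number. modify_url() percent-encodes each
# query value, so ">" is sent as "%3E" automatically.
build_search_url <- function(station_number) {
  modify_url(
    "https://wateroffice.ec.gc.ca/search/historical_results_e.html",
    query = list(
      search_type = "station_number",
      station_number = station_number,
      start_year = 1850,
      end_year = 2023,
      gross_drainage_operator = ">",
      effective_drainage_operator = ">"
    )
  )
}

build_search_url("08NM083")
```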

I then tried to download the file:

download.file(full_link, destfile = "Downloads/test_hydat.csv")
test <- read_csv(full_link)

Both of the above contain only HTML code.

r web-scraping rvest
2 Answers

Why not just use the API:

curl 'https://wateroffice.ec.gc.ca/services/map_data?data_type=historical' \
  -H 'Accept: */*' \
  -H 'Accept-Language: de,de-DE;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6,fr;q=0.5,de-CH;q=0.4,es;q=0.3' \
  -H 'Cache-Control: no-cache' \
  -H 'Connection: keep-alive' \
  -H 'DNT: 1' \
  -H 'Pragma: no-cache' \
  -H 'Referer: https://wateroffice.ec.gc.ca/map/index_e.html?type=historical' \
  -H 'Sec-Fetch-Dest: empty' \
  -H 'Sec-Fetch-Mode: cors' \
  -H 'Sec-Fetch-Site: same-origin' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.44' \
  -H 'X-Requested-With: XMLHttpRequest' \
  -H 'sec-ch-ua: "Microsoft Edge";v="111", "Not(A:Brand";v="8", "Chromium";v="111"' \
  -H 'sec-ch-ua-mobile: ?0' \
  -H 'sec-ch-ua-platform: "Windows"' \
  --compressed

to get all the stations?

For other programming languages, convert it with curlconverter.
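In R, the curl command above translates roughly to httr. This is a sketch: most of the browser-fingerprint headers are dropped on the assumption they are not required, X-Requested-With is kept since the endpoint is called via XHR on the map page, and a JSON response is an assumption.

```r
library(httr)

# Rough httr translation of the curl command above (trimmed headers).
resp <- GET(
  "https://wateroffice.ec.gc.ca/services/map_data",
  query = list(data_type = "historical"),
  add_headers(
    Referer = "https://wateroffice.ec.gc.ca/map/index_e.html?type=historical",
    `X-Requested-With` = "XMLHttpRequest"
  ),
  user_agent("Mozilla/5.0")
)

# content() picks a parser from the Content-Type; JSON is assumed here.
stations <- content(resp)
```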

Or you can search directly via the URL:

station_name = "teststation"
url = "https://wateroffice.ec.gc.ca/search/historical_results_e.html?search_type=station_name&station_name="+station_name+"&start_year=1850&end_year=2023&minimum_years=&gross_drainage_operator=%3E&gross_drainage_area=&effective_drainage_operator=%3E&effective_drainage_area="


Figured it out! I needed to jump to the "download csv" link and specifically extract the response content from the new session. Full code below for anyone who needs to do something similar:

library(rvest)  # session(), html_form(), session_submit(), session_jump_to()
library(xml2)   # read_html(), url_absolute()

station_number <- "08NM083"
url <- paste0("https://wateroffice.ec.gc.ca/search/historical_results_e.html?search_type=station_number&station_number=", 
              station_number, 
              "&start_year=1850&end_year=2023&minimum_years=&gross_drainage_operator=%3E&gross_drainage_area=&effective_drainage_operator=%3E&effective_drainage_area=")
user_a <- httr::user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36")

my_session <- session(url, user_a)

form <- html_form(my_session)[[2]]

filled <- form %>% 
  html_form_set(check_all = "all")

resp <- session_submit(x = my_session, form = filled, submit = "download")

link <- resp %>% 
  read_html() %>% 
  html_element("p+ section .col-lg-4:nth-child(1) a") %>% 
  html_attr("href")

full_link <- url_absolute(link, url)

next_ses <- my_session %>% 
  session_jump_to(full_link)

writeBin(next_ses$response$content, "Downloads/test_hydat.csv")
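An optional sanity check after the jump: next_ses$response is the underlying httr response object, so the usual httr inspectors work on it. The expected "text/csv" type is an assumption (the server may report a different content type), and reading the file back confirms it actually parses as a table.

```r
# Check what the jumped-to link actually served before trusting the file.
httr::http_type(next_ses$response)   # expect something like "text/csv"

# Read the saved file back to confirm it parses as a table.
test <- readr::read_csv("Downloads/test_hydat.csv")
```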