Scraping data with rvest


I would like to get the article names, grouped by category, from https://www.inquirer.net/article-index?d=2020-6-13

I tried to read the article names by doing this:

library('rvest')

year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-',day, sep = "")

pg <- read_html(url)

test <- pg %>%
  html_nodes("#index-wrap") %>%
  html_text()

This only returns one string containing all of the article names, and it is very messy.

I would ultimately like to have a data frame like the one below:

       Date     Category      Article Name
 2020-06-13         News      ‘We can never let our guard down’ vs terrorism – Cayetano
 2020-06-13         News      PNP spox says mañanita remark did not intend to put Sinas in bad light
 2020-06-13         News      After stranded mom’s death, Pasay LGU helps over 400 stranded individuals
 2020-06-13        World      4 dead after tanker truck explodes on highway in China
 etc.
 etc.
 etc.
 etc.
 2020-06-13    Lifestyle     Book: Melania Trump delayed 2017 move to DC to get new prenup

Does anyone know what I'm missing? I'm pretty new to this, thank you.

r rvest
1 Answer

2 votes

This is probably as close as you can get:

library(rvest)
#> Loading required package: xml2
library(tibble)

year  <- 2020
month <- 06
day   <- 13
url   <- paste0('http://www.inquirer.net/article-index?d=', year, '-', month, '-', day)

# the article-index container, then the article links and their post dates
div       <- read_html(url) %>% html_node(xpath = '//*[@id ="index-wrap"]')
links     <- html_nodes(div, xpath = '//a[@rel = "bookmark"]')
post_date <- html_nodes(div, xpath = '//span[@class = "index-postdate"]') %>%
             html_text()

test <- tibble(date = post_date,
               text = html_text(links),
               link = html_attr(links, "href"))

test
#> # A tibble: 261 x 3
#>    date     text                              link                              
#>    <chr>    <chr>                             <chr>                             
#>  1 1 day a~ ‘We can never let our guard down~ https://newsinfo.inquirer.net/129~
#>  2 1 day a~ PNP spox says mañanita remark di~ https://newsinfo.inquirer.net/129~
#>  3 1 day a~ After stranded mom’s death, Pasa~ https://newsinfo.inquirer.net/129~
#>  4 1 day a~ Putting up lining for bike lanes~ https://newsinfo.inquirer.net/129~
#>  5 1 day a~ PH Army provides accommodation f~ https://newsinfo.inquirer.net/129~
#>  6 1 day a~ DA: Local poultry production suf~ https://newsinfo.inquirer.net/129~
#>  7 1 day a~ IATF assessing proposed design t~ https://newsinfo.inquirer.net/129~
#>  8 1 day a~ PCSO lost ‘most likely’ P13B dur~ https://newsinfo.inquirer.net/129~
#>  9 2 days ~ DOH: No IATF recommendations yet~ https://newsinfo.inquirer.net/129~
#> 10 2 days ~ PH coronavirus cases exceed 25,0~ https://newsinfo.inquirer.net/129~
#> # ... with 251 more rows

Created on 2020-06-14 by the reprex package (v0.3.0)
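Note that the date column here is the relative text from the page ("1 day ago", "2 days ago"). If you want a real date like in your desired output, a minimal extra step (query_date is just an illustrative column name, and this assumes you simply want to record the date the index was requested for) would be:

# add the date requested in the URL as a proper Date column
test$query_date <- as.Date(paste(year, month, day, sep = "-"))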


2 votes

If you have read_html(), then you can use it in a dplyr statement:

library('rvest')

year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-',day, sep = "")

#added page
page <- read_html(url)

test <- page %>%
  #changed xpath
  html_node(xpath = '//*[@id ="index-wrap"]') %>%
  html_text()

test

Update: I'm bad at dplyr, but this is what I had before going to bed:

library('rvest')

year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-',day, sep = "")

# added page
page <- read_html(url)

# category headings inside #index-wrap
titles <- page %>%
  html_nodes(xpath = '//*[@id ="index-wrap"]/h4') %>%
  html_text()

# one <ul> of article links per category
sections <- page %>%
  html_nodes(xpath = '//*[@id ="index-wrap"]/ul')

stories <- sections %>%
  # relative XPath (.//) so only links inside these sections are matched
  html_nodes(xpath = './/li/a') %>%
  html_text()

stories
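To get all the way to the data frame in the question, here is a minimal sketch of how the pieces above could be combined. It assumes each h4 heading inside #index-wrap pairs one-to-one with the ul that follows it; articles, Date, Category, and Article Name are just the names used in the question.

library(tibble)

# count the article links inside each category's <ul>, so every story can be
# matched back to the heading of the section it came from
n_per_section <- sapply(sections, function(s) length(html_nodes(s, xpath = './/li/a')))

articles <- tibble(
  Date           = as.Date(paste(year, month, day, sep = "-")),  # the date requested in the URL
  Category       = rep(titles, times = n_per_section),
  `Article Name` = stories
)

articles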