tryCatch in an R loop for web scraping

Problem description

I am using RSelenium to scrape several URLs and extract price information spanning many years. My problem is that some of the URLs may not exist (I generated them from the years I need data for), so I need to skip those and keep going to the next URL.

I think tryCatch() would help, but I don't know exactly how to use it:

library(RSelenium)
library(foreach)

# build one URL per code/month/year combination
base = "https://www.cochrane.org"
codes_test = list("03040300")
month_ = c("01", "02", "03", "04","05","06","07","08", "09", "10", "11","12")
year_ = c(2008:2019)
html <- apply(expand.grid(base, codes_test, month_, year_), 
              MARGIN = 1, 
              FUN = function(x)paste(x, collapse = "/"))


remDr$navigate("https://www.cochrane.org/0304070017/10/2017")
webElement <- remDr$findElement(value = '//*[@id="acessoAutomatico"]/a')
webElement$clickElement() 

l <- length(html)
bind_test <- list()   # container for the per-URL results

for(j in seq(html)){
  sigtap <- foreach(i = 1:l) %dopar% {

    # my attempt at tryCatch -- I am not sure how to use it here
    tryCatch(stop("no"), error = function(e) cat("Error: ", e$message, "\n"))
    remDr$navigate(html[i])

    names <- remDr$findElements(value = '//*[@id="content"]/fieldset[4]/fieldset/table/tbody/tr[2]/td[1]/label | //*[@id="content"]/fieldset[4]/fieldset/table/tbody/tr[1]/td[3]/label | //*[@id="content"]/fieldset[4]/fieldset/table/tbody/tr[2]/td[3]/label | //*[@id="content"]/fieldset[4]/fieldset/table/tbody/tr[3]/td[3]/label')

    infos <- remDr$findElements(value = '//*[@id="valorSA_Total"] | //*[@id="valorSH"] | //*[@id="valorSP"] | //*[@id="totalInternacao"]')

    identificadores <- unlist(lapply(names, function(x) {x$getElementText()}))
    informacoes <- unlist(lapply(infos, function(x) {x$getElementText()}))
    bind_test[[i]] <- data.frame(identificadores, informacoes)

  }
}

# write each scraped table to its own CSV file
for (i in seq_along(bind_test)) {
  write.csv(bind_test[[i]], file = paste(i, "csv", sep = "."))
}

Thanks in advance for any help!

r loops web-scraping try-catch rselenium
1 Answer

Assuming remDr$navigate(html[i]) is what throws the error you want to catch, try something like this:

success <- tryCatch({
    remDr$navigate(html[i])
    TRUE
  },
  warning = function(w) { FALSE },
  error = function(e) { FALSE },
  finally = { })

if (!success) next
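
As a minimal sketch of how that guard could be folded into the loop from the question (assuming remDr, html and a registered foreach backend already exist; the short XPath selectors below are placeholders for the full ones in the question, and inside a foreach body the equivalent of next is yielding NULL):

library(foreach)

results <- foreach(i = seq_along(html)) %dopar% {

  # treat any error or warning from navigate() as "this URL is not available"
  success <- tryCatch({
    remDr$navigate(html[i])
    TRUE
  },
  warning = function(w) FALSE,
  error = function(e) FALSE)

  if (!success) {
    NULL    # skip this URL: its list element simply stays NULL
  } else {
    # placeholder selectors -- substitute the full XPath strings from the question
    names <- remDr$findElements(value = '//*[@id="content"]//label')
    infos <- remDr$findElements(value = '//*[@id="valorSH"]')

    data.frame(
      identificadores = unlist(lapply(names, function(x) x$getElementText())),
      informacoes = unlist(lapply(infos, function(x) x$getElementText()))
    )
  }
}

results is then a list with one data frame per reachable URL and NULL for the skipped ones. Note also that a single Selenium session generally cannot be shared across %dopar% workers, so if the parallel version misbehaves, a plain for loop (where next works as in the snippet above) is the safer starting point.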