getFinancials (quantmod) and tq_get (tidyquant) not working?

I get the same error from both quantmod and tidyquant when requesting financial data. Can anyone check whether this is reproducible? Is it a Google Finance server issue? None of the calls below work for me, and I can't tell whether the problem is on my end or the server's.

    tq_get("AAPL", get= "financials")
    [1] NA
    Warning message:
    x = 'AAPL', get = 'financials': Error in thead[x]:thead[x + 1]: NA/NaN 
    argument

And:

    getFin("AAPL")
    Error in thead[x]:thead[x + 1] : NA/NaN argument

Can anyone help?

r finance quantmod stock tidyquant
2 Answers
-1 votes

Yes, I have run into the same problem over the past few days. I think it is related to changes at Google Finance: the site is different now, and so is the URL.
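
If you want to confirm this from R, here is a minimal check using the httr package (the Google Finance URL pattern below is recalled from what older quantmod versions request, so treat it as an assumption):

    library(httr)
    # Old-style Google Finance financials URL (assumed pattern). The request
    # succeeds, but Google now redirects to the redesigned site, so the page
    # no longer contains the table that getFin()/tq_get() try to parse.
    resp <- GET("https://finance.google.com/finance?q=AAPL&fstype=ii")
    status_code(resp)
    resp$url   # final URL after redirects -- not the old financials page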


3 votes

Hi @Joe, I had the same problem. Since Google changed its pages, I wrote a function that scrapes the data from Yahoo Finance instead; its output is similar to getFin's. I hope it helps.

    scrapy_stocks <- function(stock) {
        # Install rvest on first use, then attach it
        if (!requireNamespace("rvest", quietly = TRUE)) {
            install.packages("rvest")
        }
        library(rvest)
        for (i in seq_along(stock)) {
            tryCatch(
                {
                    # --- Income statement ---
                    url <- paste0("https://finance.yahoo.com/quote/",
                                  stock[i], "/financials?p=", stock[i])
                    # html_session() was renamed session() in rvest >= 1.0
                    wahis.session <- html_session(url)
                    p <- wahis.session %>%
                        html_nodes(xpath = '//*[@id="Col1-1-Financials-Proxy"]/section/div[3]/table') %>%
                        html_table(fill = TRUE)
                    IS <- p[[1]]
                    colnames(IS) <- paste(IS[1, ])
                    # Drop header and section-title rows (indices tied to Yahoo's current layout)
                    IS <- IS[-c(1, 5, 12, 20, 25), ]
                    names_row <- paste(IS[, 1])
                    IS <- IS[, -1]
                    # Strip thousands separators, then coerce every column to numeric
                    IS <- apply(IS, 2, function(x) gsub(",", "", x))
                    IS <- as.data.frame(apply(IS, 2, as.numeric))
                    rownames(IS) <- paste(names_row)
                    temp1 <- IS
                    # --- Balance sheet ---
                    url <- paste0("https://finance.yahoo.com/quote/",
                                  stock[i], "/balance-sheet?p=", stock[i])
                    wahis.session <- html_session(url)
                    p <- wahis.session %>%
                        html_nodes(xpath = '//*[@id="Col1-1-Financials-Proxy"]/section/div[3]/table') %>%
                        html_table(fill = TRUE)
                    BS <- p[[1]]
                    colnames(BS) <- BS[1, ]
                    BS <- BS[-c(1, 2, 17, 28), ]
                    names_row <- BS[, 1]
                    BS <- BS[, -1]
                    BS <- apply(BS, 2, function(x) gsub(",", "", x))
                    BS <- as.data.frame(apply(BS, 2, as.numeric))
                    rownames(BS) <- paste(names_row)
                    temp2 <- BS
                    # --- Cash flow ---
                    url <- paste0("https://finance.yahoo.com/quote/",
                                  stock[i], "/cash-flow?p=", stock[i])
                    wahis.session <- html_session(url)
                    p <- wahis.session %>%
                        html_nodes(xpath = '//*[@id="Col1-1-Financials-Proxy"]/section/div[3]/table') %>%
                        html_table(fill = TRUE)
                    CF <- p[[1]]
                    colnames(CF) <- CF[1, ]
                    CF <- CF[-c(1, 3, 11, 16), ]
                    names_row <- CF[, 1]
                    CF <- CF[, -1]
                    CF <- apply(CF, 2, function(x) gsub(",", "", x))
                    CF <- as.data.frame(apply(CF, 2, as.numeric))
                    rownames(CF) <- paste(names_row)
                    temp3 <- CF
                    # Expose e.g. AAPL.f as a list of the three statements
                    # in the caller's environment
                    assign(paste0(stock[i], '.f'),
                           value = list(IS = temp1, BS = temp2, CF = temp3),
                           envir = parent.frame())
                },
                error = function(cond) {
                    message(stock[i], " gave an error: ", conditionMessage(cond))
                }
            )
        }
    }

You can call it as scrapy_stocks(c("AAPL","GOOGL")) and access the data as AAPL.f$IS, AAPL.f$BS and AAPL.f$CF.
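
For example, assuming Yahoo still serves the same table layout (the hard-coded row indices in the function depend on it) and that "Total Assets" is still a balance-sheet row label:

    scrapy_stocks(c("AAPL", "GOOGL"))
    AAPL.f$IS                      # income statement, one column per period
    AAPL.f$BS["Total Assets", ]    # single balance-sheet line item (label assumed)
    GOOGL.f$CF                     # cash-flow statement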
