Recursive regression in R

Question (4 votes, 3 answers)

Suppose I have a data frame in R like the following:

> set.seed(1)
> X <- runif(50, 0, 1)
> Y <- runif(50, 0, 1)
> df <- data.frame(X,Y)
> head(df)

          X          Y
1 0.2655087 0.47761962
2 0.3721239 0.86120948
3 0.5728534 0.43809711
4 0.9082078 0.24479728
5 0.2016819 0.07067905
6 0.8983897 0.09946616

How can I run a recursive regression of Y on X, starting with the first 20 observations and increasing the regression window one observation at a time until it covers the full sample?

There is plenty of information on how to run rolling regressions with a fixed window length (e.g. using rollapply from the zoo package). My search efforts have been fruitless, however, when it comes to a simple recursive option, where the starting point is fixed and the window size grows. The one exception is the lm.fit.recursive function from the quantreg package. That works fine... but it doesn't record any information on standard errors, which I need to construct recursive confidence intervals.
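For reference, a minimal sketch of that quantreg option, going by the interface in the package documentation (lm.fit.recursive takes a design matrix and returns only a matrix of coefficient estimates, one column per sample size, with no standard errors):

library(quantreg)

# X is passed as a design matrix; int = TRUE prepends an intercept column
fit <- lm.fit.recursive(as.matrix(df$X), df$Y, int = TRUE)
fit[, 20]  # intercept and slope based on the first 20 observations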

I could of course achieve this with a loop. However, my actual data frame is very large and is also grouped by id, which adds complications. So I was hoping to find a more efficient option. Essentially, I'm looking for the R equivalent of Stata's "rolling [...], recursive" command.
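As an aside, the grouped case can at least be expressed compactly with split(); a hedged sketch, assuming a hypothetical id column:

by_id <- split(df, df$id)  # 'id' is a hypothetical grouping column
fits_by_id <- lapply(by_id, function(d)
  lapply(seq(20, nrow(d)), function(i) lm(Y ~ X, data = d[1:i, ])))

This is still the explicit loop approach, applied per group, so it carries the same efficiency concerns.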

r recursion regression lm
3 Answers

8 votes

Maybe this will help:

set.seed(1)
X1 <- runif(50, 0, 1)
X2 <- runif(50, 0, 10) # I included another variable just for a better demonstration
Y <- runif(50, 0, 1)
df <- data.frame(X1,X2,Y)


rolling_lms <- lapply(seq(20, nrow(df)), function(x) lm(Y ~ X1 + X2, data = df[1:x, ]))

Using the lapply function above, you run the recursive regressions with the full information kept for each fit.

For example, the first regression uses 20 observations:

> summary(rolling_lms[[1]])

Call:
lm(formula = Y ~ X1 + X2, data = df[1:x, ])

Residuals:
     Min       1Q   Median       3Q      Max 
-0.45975 -0.19158 -0.05259  0.13609  0.67775 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  0.61082    0.17803   3.431  0.00319 **
X1          -0.37834    0.23151  -1.634  0.12060   
X2           0.01949    0.02541   0.767  0.45343   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2876 on 17 degrees of freedom
Multiple R-squared:  0.1527,    Adjusted R-squared:  0.05297 
F-statistic: 1.531 on 2 and 17 DF,  p-value: 0.2446

and it has all the information you need.

> length(rolling_lms)
[1] 31

It runs 31 linear regressions, starting from 20 observations and adding one at a time until it reaches 50. Every regression, with all of its information, is stored as an element of the rolling_lms list.

EDIT

Following Carl's comment below: to get a vector of all the slopes from each regression, in this case for the X1 variable, this is a very nice technique (as Carl suggested):

all_slopes <- unlist(sapply(1:31, function(j) rolling_lms[[j]]$coefficients[2]))

Output:

> all_slopes
         X1          X1          X1          X1          X1          X1          X1          X1          X1          X1 
-0.37833614 -0.23231852 -0.20465589 -0.20458938 -0.11796060 -0.14621369 -0.13861210 -0.11906724 -0.10149900 -0.14045509 
         X1          X1          X1          X1          X1          X1          X1          X1          X1          X1 
-0.14331323 -0.14450837 -0.16214836 -0.15715630 -0.17388457 -0.11427933 -0.10624746 -0.09767893 -0.10111773 -0.06415914 
         X1          X1          X1          X1          X1          X1          X1          X1          X1          X1 
-0.06432559 -0.04492075 -0.04122131 -0.06138768 -0.06287532 -0.06305953 -0.06491377 -0.01389334 -0.01703270 -0.03683358 
         X1 
-0.02039574 
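Since the question specifically needs standard errors, the same extraction pattern works for those too; a small sketch building on rolling_lms, with column names as in a standard summary.lm coefficient table:

all_ses <- sapply(rolling_lms, function(m)
  summary(m)$coefficients["X1", "Std. Error"])

# approximate 95% recursive confidence bands for the X1 slope
lower <- all_slopes - 1.96 * all_ses
upper <- all_slopes + 1.96 * all_ses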

6 votes

You might be interested in the biglm function from the biglm package. It lets you fit a regression on a subset of the data, then update the regression model with additional data. The original idea was to use it for large data sets, so that you only ever need part of the data in memory at any given time, but it fits what you want to do here perfectly (you can wrap the updating process in a loop). The summary of a biglm object gives confidence intervals in addition to the standard errors (and the coefficients, of course).

library(biglm)

fit1 <- biglm( Sepal.Width ~ Sepal.Length + Species, data=iris[1:20,])
summary(fit1)

out <- list()
out[[1]] <- fit1

for(i in 1:130) {
  out[[i+1]] <- update(out[[i]], iris[i+20,])
}

out2 <- lapply(out, function(x) summary(x)$mat)  # mat holds coef, 95% CI bounds, SE, p
out3 <- sapply(out2, function(x) x[2,2:3])       # 95% CI bounds for the 2nd coefficient
matplot(t(out3), type='l')                       # plot the bounds as the window grows

If you don't want to use an explicit loop, the Reduce function can help:

fit1 <- biglm( Sepal.Width ~ Sepal.Length + Species, data=iris[1:20,])
iris.split <- split(iris, c(rep(NA,20),1:130))
out4 <- Reduce(update, iris.split, init=fit1, accumulate=TRUE)
out5 <- lapply(out4, function(x) summary(x)$mat)
out6 <- sapply(out5, function(x) x[2,2:3])

all.equal(out3,out6)
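If you want the coefficient path itself in addition to the interval bounds, the same summary matrix should expose it; the "Coef" column name here is taken from summary.biglm's output:

coef_path <- sapply(out5, function(x) x[2, "Coef"])  # 2nd coefficient as the window grows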

4 votes

The solution Greg proposed with biglm is faster than the one LyzandeR proposed with lm, but still rather slow. A lot of the overhead can be avoided with the function I show below. I figure you could make it quite a lot faster still if you did it all in C++ with Rcpp.

# simulate data
set.seed(101)
n <- 1000
X <- matrix(rnorm(10 * n), n, 10)
y <- drop(10 + X %*% runif(10)) + rnorm(n)
dat <- data.frame(y = y, X)

# assign wrapper for biglm
biglm_wrapper <- function(formula, data, min_window_size){
  mf <- model.frame(formula, data)
  X <- model.matrix(terms(mf), mf)
  y <- model.response(mf)

  n <- nrow(X)
  p <- ncol(X)
  storage.mode(X) <- "double"
  storage.mode(y) <- "double"
  w <- 1
  qr <- list(
    d = numeric(p), rbar = numeric(choose(p, 2)), 
    thetab = numeric(p), sserr = 0, checked = FALSE, tol = numeric(p))
  nrbar = length(qr$rbar)
  beta. <- numeric(p)

  out <- NULL
  for(i in 1:n){
    row <- X[i, ] # will be overwritten
    qr[c("d", "rbar", "thetab", "sserr")] <- .Fortran(
      "INCLUD", np = p, nrbar = nrbar, weight = w, xrow = row, yelem = y[i], 
      d = qr$d, rbar = qr$rbar, thetab = qr$thetab, sserr = qr$sserr, ier = i - 0L, 
      PACKAGE = "biglm")[
        c("d", "rbar", "thetab", "sserr")]

    if(i >= min_window_size){
      tmp <- .Fortran(
        "REGCF", np = p, nrbar = nrbar, d = qr$d, rbar = qr$rbar,
        thetab = qr$thetab, tol = qr$tol, beta = beta., nreq = p, ier = i,
        PACKAGE = "biglm")
      Coef <- tmp$beta

      # compute vcov. See biglm/R/vcov.biglm.R
      R <- diag(p)
      R[row(R) > col(R)] <- qr$rbar
      R <- t(R)
      R <- sqrt(qr$d) * R
      ok <- qr$d != 0
      R[ok, ok] <- chol2inv(R[ok, ok, drop = FALSE])
      R[!ok, ] <- NA
      R[ ,!ok] <- NA
      out <- c(out, list(cbind(
        coef = Coef, 
        SE   = sqrt(diag(R * qr$sserr / (i - p + sum(!ok)))))))
    }
  }

  out
}

# assign function to compare with 
library(biglm)
f2 <- function(formula, data, min_window_size){
  fit1 <- biglm(formula, data = data[1:min_window_size, ])
  data.split <- 
    split(data, c(rep(NA,min_window_size),1:(nrow(data) - min_window_size)))
  out4 <- Reduce(update, data.split, init=fit1, accumulate=TRUE)
  lapply(out4, function(x) summary(x)$mat[, c("Coef", "SE")])
}

# show that the two give the same results
frm <- y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10
r1 <- biglm_wrapper(frm, dat, 25)
r2 <- f2(frm, dat, 25)
all.equal(r1, r2, check.attributes = FALSE)
#R> [1] TRUE

# show run time
microbenchmark::microbenchmark(
  r1 = biglm_wrapper(frm, dat, 25), 
  r2 = f2(frm, dat, 25), 
  r3 = lapply(
    25:nrow(dat), function(x) lm(frm, data = dat[1:x , ])),
  times = 5)
#R> Unit: milliseconds
#R>  expr        min         lq       mean    median         uq        max neval cld
#R>    r1   43.72469   44.33467   44.79847   44.9964   45.33536   45.60124     5 a  
#R>    r2 1101.51558 1161.72464 1204.68884 1169.4580 1246.74321 1344.00278     5  b 
#R>    r3 2080.52513 2232.35939 2231.02060 2253.1420 2260.74737 2328.32908     5   c
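
Each element of the list returned by biglm_wrapper is a two-column matrix with the coefficients and standard errors for one window size, so for example:

r1[[1]]  # estimates and standard errors based on the first 25 observations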