glmnet model performance compared with boosting algorithms

Question · votes: 1 · answers: 2

To help us learn machine learning, I am writing some examples — not to show that one method is better than another, but to illustrate how to use the various functions and which parameters can be tuned. I started from this blog comparing BooST and xgboost, and then successfully added gbm to the example. Now I am trying to add glmnet, but the returned model always has (near-)zero values for both coefficients. Either I am doing something wrong, or glmnet is not the right algorithm for this data, and I would like to figure out which. Here is my reproducible example:

# Uncomment the following 2 lines if you need to install BooST (requires devtools)
#library(devtools)
#install_github("gabrielrvsc/BooST")

library(BooST)
library(xgboost)
library(gbm)
library(glmnet)

# Data generating process
dgp = function(N, r2){
  X = matrix(rnorm(N*2,0,1),N,2)
  X[,ncol(X)] = base::sample(c(0,1),N,replace=TRUE)
  aux = X
  yaux = cos(pi*(rowSums(X)))
  vyaux = var(yaux)
  ve = vyaux*(1-r2)/r2
  e = rnorm(N,0,sqrt(ve))
  y = yaux+e
  return(list(y = y, X = X))
}

# Real data
x1r = rep(seq(-4,4,length.out = 1000), 2)
x2r = c(rep(0,1000), rep(1,1000))
yr = cos(pi*(x1r+x2r))
real_function = data.frame(x1 = x1r, x2 = as.factor(x2r), y = yr)

# Train data (noisy)
set.seed(1)
data = dgp(N = 1000, r2 = 0.5)
y = data$y
x = data$X

# Test data (noisy)
set.seed(2)
dataout=dgp(N = 1000, r2 = 0.5)
yout = dataout$y
xout = dataout$X

# Set seed and train all 4 models
set.seed(1)
BooST_Model = BooST(x, y, v = 0.18, M = 300 , display = TRUE)
xgboost_Model = xgboost(x, label = y, nrounds = 300, params = list(eta = 0.14, max_depth = 2))
gbm_Model = gbm.fit(x, y, distribution = "gaussian", n.trees = 10000, shrinkage = .001, interaction.depth=5)
glmnet_Model = cv.glmnet(x, y, family = "gaussian", alpha=0)
coef(glmnet_Model)

> coef(glmnet_Model)
3 x 1 sparse Matrix of class "dgCMatrix"
                                                             1
(Intercept)  0.078072154632597062784427066617354284971952438
V1          -0.000000000000000000000000000000000000000033534
V2          -0.000000000000000000000000000000000000044661342
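A side note on reading that output: for a cv.glmnet object, both coef() and predict() use s = "lambda.1se" by default, so the coefficients shown above are not necessarily those of the least-penalized cross-validated fit. A minimal standalone sketch (the demo data here is an assumption, not the question's dgp output):

```r
library(glmnet)

set.seed(1)
x_demo = matrix(rnorm(200), 100, 2)
y_demo = cos(pi * rowSums(x_demo)) + rnorm(100, 0, 0.5)

cv_fit = cv.glmnet(x_demo, y_demo, family = "gaussian", alpha = 0)

coef(cv_fit)                     # coefficients at lambda.1se (the default)
coef(cv_fit, s = "lambda.min")   # coefficients at the CV-error-minimizing lambda
```

Passing s = "lambda.min" explicitly, as the question later does in the fitted data frame, keeps the comparison consistent across all predictions.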

# Predict from test data
p_BooST = predict(BooST_Model, xout)
p_xgboost = predict(xgboost_Model, xout)
p_gbm = predict(gbm_Model, xout, n.trees=10000)
p_glmnet = predict(glmnet_Model, newx = xout)  # cv.glmnet predicts at s = "lambda.1se" by default

# Show RMSE
sqrt(mean((p_BooST - yout)^2))
sqrt(mean((p_xgboost - yout)^2))
sqrt(mean((p_gbm - yout)^2))
sqrt(mean((p_glmnet - yout)^2))

fitted = data.frame(x1 = x[,1], x2 = as.factor(x[,2]),
  BooST = fitted(BooST_Model),
  xgboost = predict(xgboost_Model, x),
  gbm = predict(object = gbm_Model, newdata = x, n.trees = 10000),
  glmnet = predict(glmnet_Model, newx = x, s=glmnet_Model$lambda.min)[, 1], y = y)

# Plot noisy Y
ggplot() + geom_point(data = fitted, aes(x = x1, y = y, color = x2)) + geom_line(data = real_function, aes(x = x1, y = y, linetype = x2))

# Plot xgboost
ggplot() + geom_point(data = fitted, aes(x = x1, y = y), color = "gray") + geom_point(data = fitted, aes(x = x1, y = xgboost, color = x2)) + geom_line(data = real_function, aes(x = x1, y = y, linetype = x2))

# Plot BooST
ggplot() + geom_point(data = fitted, aes(x = x1, y = y), color = "gray") + geom_point(data = fitted, aes(x = x1, y = BooST, color = x2)) + geom_line(data = real_function, aes(x = x1, y = y, linetype = x2))

# Plot gbm
ggplot() + geom_point(data = fitted, aes(x = x1, y = y), color = "gray") + geom_point(data = fitted, aes(x = x1, y = gbm, color = x2)) + geom_line(data = real_function, aes(x = x1, y = y, linetype = x2))

# Plot glmnet
ggplot() + geom_point(data = fitted, aes(x = x1, y = y), color = "gray") + geom_point(data = fitted, aes(x = x1, y = glmnet, color = x2)) + geom_line(data = real_function, aes(x = x1, y = y, linetype = x2))
Tags: r, machine-learning, glmnet
2 Answers
0 votes

"Either I am doing something wrong"

You are not — at least not programming-wise.

"or glmnet is not the right algorithm for this data"

It is not that glmnet is "wrong" (although it is meant primarily for problems with many predictors, not just a couple); it is that your comparison is fundamentally unfair and inappropriate: all 3 of the other algorithms you use are ensembles. Your gbm, for example, consists of ten thousand (10,000) individual decision trees! Comparing that against a single regressor such as glmnet is almost like comparing apples to oranges.

Nevertheless, this should serve as a good exercise and reminder that, although from a programming perspective all these tools look "equivalent" ("well, I just load each of them with library(), right? So why shouldn't they be equivalent and comparable?"), a great deal is hidden beneath the surface. That is why at least a basic familiarity with the principles of statistical learning is always a good idea (for starters, I strongly suggest the freely available Introduction to Statistical Learning, which includes R code snippets).

Boosting ensembles in particular (the unifying element behind the other 3 algorithms you use here) are no joke! When it appeared (roughly a decade before the deep learning era), AdaBoost was a true game changer, and in its xgboost implementation boosting is still the winning choice in most Kaggle competitions involving "traditional" structured data (i.e. no text or images).
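The ensemble-size point can be sketched directly: the same boosted fit, evaluated with a single tree versus the full ensemble. This is a standalone demo on simulated data (parameters reduced from the question's for speed; the variable names are my own), not code from the question:

```r
library(gbm)

set.seed(1)
x_demo = matrix(rnorm(2000), 1000, 2)
y_demo = cos(pi * rowSums(x_demo)) + rnorm(1000, 0, 0.5)

fit = gbm.fit(x_demo, y_demo, distribution = "gaussian",
              n.trees = 2000, shrinkage = .005,
              interaction.depth = 5, verbose = FALSE)

rmse = function(p) sqrt(mean((p - y_demo)^2))
rmse(predict(fit, x_demo, n.trees = 1))     # one shrunken tree: barely moves off the mean
rmse(predict(fit, x_demo, n.trees = 2000))  # full ensemble: substantially lower error
```

A "gbm with 1 tree" is the fairer single-learner analogue of glmnet here, and it performs nowhere near the full ensemble.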


0 votes

Keep in mind that glmnet fits a linear model, meaning the response can be written as a linear combination of the predictors:

 y = b0 + b1*x1 + b2*x2 + ...

In your dataset, however, you define the response as

yaux = cos(pi*(rowSums(X)))

yr = cos(pi*(x1r+x2r))

which in both cases is clearly not a linear combination of the predictors.
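One way to see this point in action: if the nonlinearity is moved into the features, glmnet recovers the relationship easily, because the response is then linear in the transformed predictor. A minimal standalone sketch (the transformed-feature trick and variable names are my own illustration, not part of either answer):

```r
library(glmnet)

set.seed(1)
X_demo = matrix(rnorm(2000), 1000, 2)
y_demo = cos(pi * rowSums(X_demo)) + rnorm(1000, 0, 0.5)

# Raw predictors: coefficients shrink toward zero, as in the question
raw_fit = cv.glmnet(X_demo, y_demo, alpha = 0)

# Add the feature cos(pi*(x1 + x2)): y is now linear in the first column
Z = cbind(cos(pi * rowSums(X_demo)), X_demo)
trans_fit = cv.glmnet(Z, y_demo, alpha = 0)
coef(trans_fit, s = "lambda.min")  # first coefficient comes out near 1
```

Of course, this only works because we know the true functional form; the appeal of the tree ensembles above is that they discover such nonlinearities from the data.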

© www.soinside.com 2019 - 2024. All rights reserved.