How to find the important variables for each prediction in R XGBoost

Problem description — votes: 0, answers: 1

I applied xgboost to the dataset below and made predictions. I was able to get the most important features for the model as a whole, but I also want to know which features are most important for each individual prediction. I tried using the DALEX package to find the important variables per prediction, but I get an error.

Please find the code below:

rm(list=ls(all=T))
library("iBreakDown")
library("breakDown")
library("xgboost")
library("DALEX")
library("ingredients")
library("dplyr")   # needed for %>% and select() below
data(HR_data)
head(HR_data)
table(HR_data$left)
str(HR_data)

label <- HR_data$left
HR_data <- HR_data %>% select(-c(sales, salary, left))




# train and test split
set.seed(123)  # make the partition reproducible
n = nrow(HR_data)
train.index = sample(n, floor(0.75*n))
train.data = as.matrix(HR_data[train.index,])
train.label = label[train.index]
test.data = as.matrix(HR_data[-train.index,])
test.label = label[-train.index]


xgb.train = xgb.DMatrix(data=train.data,label=train.label)
xgb.test = xgb.DMatrix(data=test.data,label=test.label)

params = list(
  booster="gbtree",
  eta=0.001,
  max_depth=5,
  gamma=3,
  subsample=0.75,
  colsample_bytree=1,
  objective="binary:logistic",
  eval_metric="auc"
)



xgb.fit=xgb.train(
  params=params,
  data=xgb.train,
  nrounds=10000,
  nthread=1,
  early_stopping_rounds=10,
  watchlist=list(val1=xgb.train,val2=xgb.test),
  verbose=0
)

xgb.fit

xgb.pred = predict(xgb.fit,test.data,reshape=T)
xgb.pred = as.data.frame(xgb.pred)

### Important variables
xi <- xgb.importance(colnames(xgb.train), model = xgb.fit)


### using the training data to find the most important attributes of its predictions
train_d<-as.data.frame(train.data)
train_l<-as.data.frame(train.label)
colnames(train_l)<-"left"
train_df<-cbind(train_d,train_l)

### xgboost explainer via DALEX
library("DALEX")


model_martix_train <- model.matrix(train_df$left ~.-1,train_df)
data_train <- xgb.DMatrix(model_martix_train, label = train_df$left)


xgb_model <- xgb.train(params = params, data_train, nrounds = 50)
xgb_model


predict_logit <- function(model, x) {
  raw_x <- predict(model, x)
  exp(raw_x)/(1 + exp(raw_x))
}
logit <- function(x) exp(x)/(1+exp(x))


explainer_xgb <- explain(xgb_model, 
                         data = model_martix_train, 
                         y = train_df$left, 
                         predict_function = predict_logit,
                         link = logit,
                         label = "xgboost")

nobs <- model_martix_train[1:50, , drop = FALSE]
sp_xgb  <- break_down(explainer_xgb, observation = nobs)

I get an error when using break_down; the error is:

Error in break_down(explainer_xgb, observation = nobs) : unused argument (observation = nobs)
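The unused-argument error suggests `break_down()` has no argument named `observation`. In current iBreakDown/DALEX versions the second argument is `new_observation` (it can also be passed positionally, as the working example further down does), and `break_down()` explains one observation at a time. A minimal sketch of the corrected call, assuming `explainer_xgb` and `model_martix_train` as defined above:

```r
# pass a single row, and use the argument name new_observation
# (or pass it positionally as the second argument)
nobs <- model_martix_train[1, , drop = FALSE]
sp_xgb <- break_down(explainer_xgb, new_observation = nobs)
plot(sp_xgb)
```

To explain several observations, loop over rows and call `break_down()` once per row.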

When I use the code below, it does not give an error; but when I try to apply the same logic to my own dataset, I get the error above.

The code below runs without errors:

library("iBreakDown")
library("breakDown")
library("xgboost")
library("DALEX")
library("ingredients")

data(HR_data)
model_martix_train <- model.matrix(left ~ . - 1, HR_data)
data_train <- xgb.DMatrix(model_martix_train, label = HR_data$left)

param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2,
              objective = "binary:logistic", eval_metric = "auc")
HR_xgb_model <- xgb.train(param, data_train, nrounds = 50)

predict_logit <- function(model, x) {
  raw_x <- predict(model, x)
  exp(raw_x)/(1 + exp(raw_x))
}
logit <- function(x) exp(x)/(1+exp(x))

### Explainer from DALEX
explainer_xgb <- explain(HR_xgb_model,
                         data = model_martix_train,
                         y = HR_data$left,
                         predict_function = predict_logit,
                         link = logit,
                         label = "xgboost")

### Prediction plot
nobs <- model_martix_train[1, , drop = FALSE]
sp_xgb <- break_down(explainer_xgb, nobs)
plot(sp_xgb)
I would appreciate any help. I would also welcome any other way to find the most important attributes for each prediction; the reason I am looking for alternatives is that my data frame has over 3 million rows, and using DALEX would be very time-consuming.

r machine-learning xgboost feature-selection dalex
1 Answer
0 votes
I first obtained the model weights from a gblinear XGBoost model:

model_weights = as.data.frame(xgb.importance(model = your_model_name))
Then, for each observation, I multiplied the feature weights by the observation's feature values to get per-feature coefficients. Under the hood the XGBoost algorithm adds other terms to the final prediction, such as the intercept, but this gives you a rough idea of the feature importance for a single observation's prediction.
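A rough sketch of that weight-times-value idea. This assumes a gblinear booster (for which `xgb.importance` returns a `Weight` column); the feature names and toy data here are invented for illustration:

```r
library(xgboost)

# toy data: 3 numeric features, binary label (illustrative only)
set.seed(1)
X <- matrix(rnorm(300), ncol = 3,
            dimnames = list(NULL, c("f1", "f2", "f3")))
y <- as.numeric(X[, 1] + 0.5 * X[, 2] + rnorm(100) > 0)
dtrain <- xgb.DMatrix(X, label = y)

bst <- xgb.train(params = list(booster = "gblinear",
                               objective = "binary:logistic"),
                 data = dtrain, nrounds = 20)

# per-feature weights of the linear model (columns: Feature, Weight)
w <- as.data.frame(xgb.importance(model = bst))

# rough contribution of each feature to one observation's margin:
# weight * feature value (ignores the intercept / base_score)
obs <- X[1, ]
contrib <- w$Weight * obs[w$Feature]
contrib[order(-abs(contrib))]   # most influential features first
```

Because this ignores the intercept and the link function, treat the ranking, not the absolute values, as the useful output.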

If you want something closer to what the model actually computes, you can try:

xgb.dump(your_model_name)
This gives you the intercept as well as all of the weights mentioned before. Even if you try to compute an observation's prediction manually from all of this, it may still not match the model's output. That is likely because of the base_score parameter, which defaults to 0.5 and is added to the prediction by the model.
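Separately, given the concern about 3 million rows: the xgboost package itself can return per-prediction feature contributions natively via `predict(..., predcontrib = TRUE)` (for tree boosters these are SHAP values; the last column is the bias/intercept). This runs in native code and is far faster than per-row explainers. A sketch on invented toy data:

```r
library(xgboost)

set.seed(1)
X <- matrix(rnorm(300), ncol = 3,
            dimnames = list(NULL, c("f1", "f2", "f3")))
y <- as.numeric(X[, 1] + 0.5 * X[, 2] + rnorm(100) > 0)
dtrain <- xgb.DMatrix(X, label = y)

bst <- xgb.train(params = list(objective = "binary:logistic",
                               max_depth = 2),
                 data = dtrain, nrounds = 20)

# one row of contributions per observation; each row plus the BIAS
# column sums to that observation's margin (log-odds) prediction
contribs <- predict(bst, X, predcontrib = TRUE)
head(contribs)   # columns: f1, f2, f3, BIAS
```

For each row, the feature with the largest absolute contribution is the most important one for that particular prediction.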
