What is the difference between eval_metric and feval in xgboost?

Question · Votes: 4 · Answers: 2

What is the difference between eval_metric and feval in xgb.train? Are both of these parameters used only for evaluation purposes?

A post from Kaggle offers some insight:

https://www.kaggle.com/c/prudential-life-insurance-assessment/forums/t/18473/custom-objective-for-xgboost

r xgboost kaggle
2 Answers
7 votes

They both do approximately the same thing.

eval_metric can take a string (to use one of xgboost's internal metrics) or a user-defined function.

feval only takes a function.

As you point out, both are for evaluation purposes.

In the example below, you can see that they are used very similarly.


library(xgboost)

## Data and watchlist setup so the examples below run:
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(train = dtrain, eval = dtest)

## A simple xgb.train example:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2,
              objective = "binary:logistic", eval_metric = "auc")
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)

## An xgb.train example where custom objective and evaluation metric are used:
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  preds <- 1/(1 + exp(-preds))
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
  return(list(metric = "error", value = err))
}

# These functions could be used by passing them either:
# as 'objective' and 'eval_metric' parameters in the params list:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2,
              objective = logregobj, eval_metric = evalerror)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)

# or through the ... arguments:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
                 objective = logregobj, eval_metric = evalerror)

# or as dedicated 'obj' and 'feval' parameters of xgb.train:
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
                 obj = logregobj, feval = evalerror)


4 votes

feval is used to create your own custom evaluation metric, while eval_metric is for the built-in metrics that the xgboost package implements.
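As a minimal sketch of that distinction (assuming the agaricus dtrain and watchlist from the example in the first answer; the metric function mean_abs_error below is made up for illustration), a built-in metric is named as a string via eval_metric, while a hand-written metric is a function passed via feval:

# Built-in metric: name it as a string in eval_metric.
param <- list(max_depth = 2, eta = 1, objective = "binary:logistic",
              eval_metric = "logloss")
bst_builtin <- xgb.train(param, dtrain, nrounds = 2, watchlist)

# Custom metric: a function(preds, dtrain) returning list(metric, value),
# passed via feval.  (Assumes preds arrive as probabilities when the
# built-in binary:logistic objective is used.)
mean_abs_error <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  list(metric = "mae", value = mean(abs(labels - preds)))
}
param <- list(max_depth = 2, eta = 1, objective = "binary:logistic")
bst_custom <- xgb.train(param, dtrain, nrounds = 2, watchlist,
                        feval = mean_abs_error)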
