FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan

Problem description (0 votes, 3 answers)

I am trying to tune the learning rate and max depth parameters of an XGBoost regression model:

from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

param_grid = [
    # trying learning rates from 0.01 to 0.2
    {'eta ':[0.01, 0.05, 0.1, 0.2]},
    # and max depth from 4 to 10
    {'max_depth': [4, 6, 8, 10]}
  ]

xgb_model = XGBRegressor(random_state = 0)
grid_search = GridSearchCV(xgb_model, param_grid, cv=5,
                           scoring='neg_root_mean_squared_error',
                           return_train_score=True)

grid_search.fit(final_OH_X_train_scaled, y_train)

final_OH_X_train_scaled is the training dataset, containing only numeric features.

y_train holds the training labels, which are also numeric.

This returns the error:

FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan.

I have seen other similar questions, but none of the answers solved this.

I also tried:

param_grid = [
    # trying learning rates from 0.01 to 0.2
    # and max depth from 4 to 10
    {'eta ': [0.01, 0.05, 0.1, 0.2], 'max_depth': [4, 6, 8, 10]}   
  ]

but it produces the same error.
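One debugging step worth knowing here (not from the original post): GridSearchCV accepts error_score='raise', which surfaces the underlying exception instead of silently recording nan scores; depending on the scikit-learn version, an invalid parameter key may also raise immediately on its own. A minimal sketch, using scikit-learn's Ridge with a deliberately bad key so it runs without xgboost installed:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=40, random_state=0)

# 'alpha ' has a stray trailing space, mimicking the 'eta ' key in the question
search = GridSearchCV(Ridge(), {'alpha ': [0.1, 1.0]}, cv=3, error_score='raise')
try:
    search.fit(X, y)
except ValueError as e:
    # the real cause is now visible: invalid parameter 'alpha ' for estimator Ridge
    print(type(e).__name__)  # → ValueError
```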

Edit: here is a sample of the data:

final_OH_X_train_scaled.head()

y_train.head()

Edit 2:

A sample of the data can be reconstructed with:

import pandas as pd

final_OH_X_train_scaled = pd.DataFrame(
    [[0.540617, 1.204666, 1.670791, -0.445424, -0.890944, -0.491098, 0.094999, 1.522411, -0.247443, -0.559572, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
     [0.117467, -2.351903, 0.718969, -0.119721, -0.874705, -0.530832, -1.385230, 2.126612, -0.947731, -0.156967, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
     [0.901138, -0.208256, -0.019134, 0.265250, -0.889128, -0.467753, 0.169306, -0.973256, 0.056164, -0.671978, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
     [2.074639, 0.100602, -1.645121, 0.929598, 0.811911, 1.364560, 0.337242, 0.435187, -0.388075, 1.279959, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
     [2.198099, -0.496254, -0.917933, -1.418407, -0.975889, 1.044495, 0.254181, 1.335285, 2.079415, 2.071974, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]],
    columns=['cont0', 'cont1', 'cont2', 'cont3', 'cont4', 'cont5', 'cont6', 'cont7', 'cont8', 'cont9', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40'])
Tags: python, scikit-learn, xgboost
3 Answers

4 votes

I was able to reproduce the problem: the fit fails because there is an extra space in your eta parameter name! Instead of this:

{'eta ':[0.01, 0.05, 0.1, 0.2]},...

change it to this:

{'eta':[0.01, 0.05, 0.1, 0.2]},...

Unfortunately, the error message is not very helpful.
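A quick way to catch such misspelled keys before fitting is to compare each param_grid key against the estimator's get_params(). This is a sketch, not from the original answer; it uses scikit-learn's Ridge so it runs without xgboost installed, but the same check works on an XGBRegressor instance.

```python
from sklearn.linear_model import Ridge

# stray trailing space in the key, mimicking the question's 'eta '
param_grid = {'alpha ': [0.01, 0.05, 0.1, 0.2]}

valid = set(Ridge().get_params())
bad_keys = [k for k in param_grid if k not in valid]
print(bad_keys)  # → ['alpha ']
```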


0 votes

Here is another example: if, for LogisticRegression, you set your grid to something like

grid_lr = {
    'cls__class_weight': [None, 'balanced'],
    'cls__C': [0, .001, .01, .1, 1]
}

you will get a similar error; the reason is that C can only take positive float values, and the grid above includes 0. So simply double-checking the names and the values of your hyperparameters is enough to resolve the problem.
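A corrected version of that grid can be sketched as follows. The original keys use the cls__ prefix of a pipeline step; this minimal sketch drops the pipeline and tunes LogisticRegression directly, with the invalid C=0 removed:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=120, n_features=8, random_state=0)

grid_lr = {
    'class_weight': [None, 'balanced'],
    'C': [.001, .01, .1, 1],  # strictly positive: 0 removed
}
search = GridSearchCV(LogisticRegression(max_iter=1000), grid_lr, cv=3)
search.fit(X, y)  # no FitFailedWarning: every candidate is valid
print(search.best_params_['C'] > 0)  # → True
```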


0 votes

I ran into the same error, and in my case the cause was not an extra space in a parameter name. It took me a long time to figure out what was going on, so I am posting it here:

I used a scikit-learn Pipeline as the estimator, which included a OneHotEncoder that automatically creates 0/1 features from the categories present in the training set. This normally works when every category is reasonably represented. However, one feature had a very sparse category (less than 1% overall), so depending on the CV split that category could be absent from the training fold, and the corresponding one-hot column was missing. This broke a later pipeline step that explicitly selected the encoded feature.

To avoid this problem when using OneHotEncoder, you should explicitly specify the expected categories, or use the min_frequency parameter.
