XGBoost: AttributeError: 'DataFrame' object has no attribute 'feature_names'

Problem description

I have trained an XGBoost classifier for binary classification. While training the model on the training data with CV and predicting on the test data, I run into the error AttributeError: 'DataFrame' object has no attribute 'feature_names'.

My code is as follows:

import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold

folds = StratifiedKFold(n_splits=5, shuffle=False, random_state=44000)
oof = np.zeros(len(X_train))          # out-of-fold predictions
predictions = np.zeros(len(X_test))   # averaged test-set predictions

for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train, y_train)):
    print("Fold {}".format(fold_+1))
    trn_data = xgb.DMatrix(X_train.iloc[trn_idx], y_train.iloc[trn_idx])
    val_data = xgb.DMatrix(X_train.iloc[val_idx], y_train.iloc[val_idx])

    clf = xgb.train(params = best_params,
                    dtrain = trn_data, 
                    num_boost_round = 2000, 
                    evals = [(trn_data, 'train'), (val_data, 'valid')],
                    maximize = False,
                    early_stopping_rounds = 100, 
                    verbose_eval=100)

    oof[val_idx] = clf.predict(X_train.iloc[val_idx], ntree_limit=clf.best_ntree_limit)
    predictions += clf.predict(X_test, ntree_limit=clf.best_ntree_limit)/folds.n_splits

How can I fix this?

Here is the full error trace:

Fold 1
[0] train-auc:0.919667  valid-auc:0.822968
Multiple eval metrics have been passed: 'valid-auc' will be used for early stopping.

Will train until valid-auc hasn't improved in 100 rounds.
[100]   train-auc:1 valid-auc:0.974659
[200]   train-auc:1 valid-auc:0.97668
[300]   train-auc:1 valid-auc:0.977696
[400]   train-auc:1 valid-auc:0.977704
Stopping. Best iteration:
[376]   train-auc:1 valid-auc:0.977862

Exception ignored in: <bound method DMatrix.__del__ of <xgboost.core.DMatrix object at 0x7f3d9c285550>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/xgboost/core.py", line 368, in __del__
    if self.handle is not None:
AttributeError: 'DMatrix' object has no attribute 'handle'
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-55-d52b20cc0183> in <module>()
     19                     verbose_eval=100)
     20 
---> 21     oof[val_idx] = clf.predict(X_train.iloc[val_idx], ntree_limit=clf.best_ntree_limit)
     22 
     23     predictions += clf.predict(X_test, ntree_limit=clf.best_ntree_limit)/folds.n_splits

/usr/local/lib/python3.6/dist-packages/xgboost/core.py in predict(self, data, output_margin, ntree_limit, pred_leaf, pred_contribs, approx_contribs)
   1042             option_mask |= 0x08
   1043 
-> 1044         self._validate_features(data)
   1045 
   1046         length = c_bst_ulong()

/usr/local/lib/python3.6/dist-packages/xgboost/core.py in _validate_features(self, data)
   1271         else:
   1272             # Booster can't accept data with different feature names
-> 1273             if self.feature_names != data.feature_names:
   1274                 dat_missing = set(self.feature_names) - set(data.feature_names)
   1275                 my_missing = set(data.feature_names) - set(self.feature_names)

/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in __getattr__(self, name)
   3612             if name in self._info_axis:
   3613                 return self[name]
-> 3614             return object.__getattribute__(self, name)
   3615 
   3616     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'feature_names'
Tags: python, pandas, machine-learning, scikit-learn, xgboost
1 Answer

The problem is solved. The issue was that I did not convert X_train.iloc[val_idx] to an xgb.DMatrix before predicting: the Booster returned by xgb.train expects a DMatrix, not a pandas DataFrame, which is why predict ends up looking for a feature_names attribute the DataFrame does not have. After converting both X_train.iloc[val_idx] and X_test to xgb.DMatrix, the error went away!

I updated the following two lines:

oof[val_idx] = clf.predict(xgb.DMatrix(X_train.iloc[val_idx]), ntree_limit=clf.best_ntree_limit)
predictions += clf.predict(xgb.DMatrix(X_test), ntree_limit=clf.best_ntree_limit)/folds.n_splits
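For reference, here is a minimal sketch of the corrected loop with the fix in context. It assumes the same X_train, X_test, y_train and best_params as in the question, and reuses the fold's existing val_data DMatrix instead of building a new one; it also builds the test DMatrix once outside the loop so X_test is not re-wrapped on every fold. The comments about iteration_range apply to newer xgboost releases (1.4+), where ntree_limit is deprecated; treat the exact version behaviour as an assumption to verify against your installed version.

# Sketch only: X_train, X_test, y_train and best_params are assumed to exist as in the question.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold

# random_state is dropped because it has no effect when shuffle=False
folds = StratifiedKFold(n_splits=5, shuffle=False)
oof = np.zeros(len(X_train))
predictions = np.zeros(len(X_test))

# Build the test DMatrix once; feature names are taken from the DataFrame columns.
test_data = xgb.DMatrix(X_test)

for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train, y_train)):
    print("Fold {}".format(fold_ + 1))
    trn_data = xgb.DMatrix(X_train.iloc[trn_idx], y_train.iloc[trn_idx])
    val_data = xgb.DMatrix(X_train.iloc[val_idx], y_train.iloc[val_idx])

    clf = xgb.train(params=best_params,
                    dtrain=trn_data,
                    num_boost_round=2000,
                    evals=[(trn_data, 'train'), (val_data, 'valid')],
                    early_stopping_rounds=100,
                    verbose_eval=100)

    # Predict on DMatrix objects, not DataFrames.
    # On xgboost >= 1.4, prefer iteration_range over the deprecated ntree_limit, e.g.:
    #   clf.predict(val_data, iteration_range=(0, clf.best_iteration + 1))
    oof[val_idx] = clf.predict(val_data, ntree_limit=clf.best_ntree_limit)
    predictions += clf.predict(test_data, ntree_limit=clf.best_ntree_limit) / folds.n_splits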