Cross-validation, scoring on the precision of only some classes

Problem description · Votes: 0 · Answers: 1
I want to run cross-validation and try multiple estimators, but I am only interested in scoring on the precision of classes 1 and 2. I don't care about the precision of class 0, and I don't want its score to throw off the CV optimization. I also don't care about recall for any class. In other words, I want to be sure that whenever a 1 or 2 is predicted, it is predicted with high confidence.

The question is: how do I run cross_val_score and tell its scoring function to ignore the precision of class 0?

Update: based on the accepted answer, here is an example of working code:

import numpy as np
from sklearn import metrics
from sklearn.model_selection import cross_val_score

def custom_precision_score(y_true, y_pred):
    # Per-class precision/recall/f-score/support arrays, ordered by class label.
    precision_tuple, recall_tuple, fscore_tuple, support_tuple = \
        metrics.precision_recall_fscore_support(y_true, y_pred)
    # Drop class 0 from both precision and support.
    precision_tuple = precision_tuple[1:]
    support_tuple = support_tuple[1:]
    # Support-weighted average of precision over the remaining classes.
    weighted_precision = np.average(precision_tuple, weights=support_tuple)
    return weighted_precision

custom_scorer = metrics.make_scorer(custom_precision_score)

scores = cross_val_score(clf, featuresArray, targetArray, cv=10, scoring=custom_scorer)
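Since the snippet above assumes clf, featuresArray, and targetArray already exist, here is a self-contained sketch of the same idea. The synthetic 3-class dataset and the RandomForestClassifier are illustrative stand-ins, not part of the original question:

```python
import numpy as np
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def custom_precision_score(y_true, y_pred):
    # Per-class precision and support; zero_division=0 silences warnings
    # for classes that are never predicted in a fold.
    precision, recall, fscore, support = metrics.precision_recall_fscore_support(
        y_true, y_pred, zero_division=0
    )
    # Ignore class 0; weight classes 1 and 2 by their support.
    return np.average(precision[1:], weights=support[1:])

custom_scorer = metrics.make_scorer(custom_precision_score)

# Synthetic 3-class problem standing in for the real data.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=4,
                           random_state=0)
clf = RandomForestClassifier(random_state=0)

scores = cross_val_score(clf, X, y, cv=5, scoring=custom_scorer)
```

Each entry of scores is the support-weighted precision over classes 1 and 2 on one held-out fold; class 0 never enters the average.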

python machine-learning cross-validation scikit-learn

1 Answer

5 votes

cross_val_score includes a scoring parameter, which lets you use your own scoring strategy. Wrap a function with the signature score_func(y, y_pred, **kwargs) using make_scorer; during cross-validation it is then evaluated on the groups held out for testing in each fold.
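As an aside, the same effect can be had without a hand-rolled function: scikit-learn's built-in precision_score accepts labels and average parameters, so you can restrict the support-weighted average to classes 1 and 2 directly (a sketch, assuming the classes are labeled 0, 1, and 2):

```python
from sklearn.metrics import make_scorer, precision_score

# Support-weighted precision over classes 1 and 2 only; class 0 is ignored.
custom_scorer = make_scorer(precision_score, labels=[1, 2],
                            average='weighted', zero_division=0)
```

Pass this object as scoring=custom_scorer to cross_val_score exactly as in the example above.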