Implementing K-fold cross-validation with an MLPClassifier in Python

Question · votes: 1 · answers: 3

I am learning how to develop a back-propagation neural network with scikit-learn. I am still confused about how to implement k-fold cross-validation for my neural network. I hope you can help me. My code is as follows:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

data = np.loadtxt("seeds_dataset.txt")

X = data[:, :-1]  # features: every column except the last
y = data[:, -1]   # labels: the last column
kf = KFold(n_splits=10)
# This is where I am stuck -- `train` and `test` are not defined anywhere:
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
# Output of clf.fit(X, y):
# MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
#        beta_1=0.9, beta_2=0.999, early_stopping=False,
#        epsilon=1e-08, hidden_layer_sizes=(5, 2), learning_rate='constant',
#        learning_rate_init=0.001, max_iter=200, momentum=0.9,
#        nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
#        solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
#        warm_start=False)
Tags: python, neural-network, backpropagation
3 Answers

4 votes

Do not split the data into train and test sets yourself; KFold cross-validation handles that for you.

from sklearn.model_selection import KFold
kf = KFold(n_splits=10)
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)

for train_indices, test_indices in kf.split(X):
    clf.fit(X[train_indices], y[train_indices])
    print(clf.score(X[test_indices], y[test_indices]))

KFold validation partitions your dataset into n equal, fair parts. Each part then serves once as the test set while the remaining parts form the training set. This gives you a fairly accurate measure of the model's quality, because it is evaluated on every small, fairly distributed portion of the data in turn.
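To turn the per-fold scores from the loop above into a single summary number, you can collect them and average. A minimal sketch, using synthetic data from `make_classification` as a stand-in for the seeds dataset (which is not included here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for seeds_dataset.txt: any (X, y) pair works the same way
X, y = make_classification(n_samples=200, n_features=7, random_state=0)

kf = KFold(n_splits=10, shuffle=True, random_state=1)
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2),
                    random_state=1, max_iter=500)

scores = []
for train_idx, test_idx in kf.split(X):
    clf.fit(X[train_idx], y[train_idx])          # train on 9 folds
    scores.append(clf.score(X[test_idx], y[test_idx]))  # test on the held-out fold

print("mean accuracy over 10 folds:", np.mean(scores))
```

Note that `shuffle=True` is worth adding when the rows of the file are ordered by class, as the seeds dataset is; otherwise some folds may contain only one class.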


2 votes

Thanks to @COLDSPEED's answer.

If you want the predictions from n-fold cross-validation, cross_val_predict() is the way to go.

import pandas as pd
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

# Shuffle, then split the data frame into train+validation (80%) and test (20%)
df = df.sample(frac=1).reset_index(drop=True)
train_ratio = 0.8
df_train = df.iloc[: int(len(df) * train_ratio)]

# Convert the dataframe to ndarrays, since kf.split returns ndarray indices
feature = df_train.iloc[:, 0:-1].values
target = df_train.iloc[:, -1].values

clf = MLPClassifier(activation='relu', solver='adam', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1, verbose=True)
y_pred = cross_val_predict(clf, feature, target, cv=10)

Basically, the cv option specifies how many cross-validation folds to use. y_pred has the same size as target, with one out-of-fold prediction per sample.
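Because y_pred lines up element for element with target, you can score it directly with any metric. A sketch, again substituting synthetic data for the dataframe above:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the (feature, target) arrays built from the dataframe
feature, target = make_classification(n_samples=200, n_features=7, random_state=0)

clf = MLPClassifier(activation='relu', solver='adam', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1, max_iter=500)

# Each sample's prediction comes from the fold in which it was held out
y_pred = cross_val_predict(clf, feature, target, cv=10)

print("out-of-fold accuracy:", accuracy_score(target, y_pred))
```

Unlike averaging per-fold scores, this gives you the raw predictions, so you can also build a confusion matrix or classification report from `target` and `y_pred`.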


0 votes

If you are looking for an already built-in method to do this, you can look at cross_validate:

from sklearn.model_selection import cross_validate

model = MLPClassifier()
cv_results = cross_validate(model, X, Y, cv=10,
                            return_train_score=False,
                            scoring='accuracy')
print("Fit scores: {}".format(cv_results['test_score']))

What I like about this approach is that it gives you access to fit_time, score_time, and test_score. It also lets you supply your own scoring metric and cross-validation generator/iterable (i.e. KFold). Another good resource is the scikit-learn Cross Validation user guide.
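For instance, cv can be a KFold instance instead of a plain integer, and the returned dict exposes the timing arrays alongside the scores. A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_validate
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the (X, Y) arrays used above
X, Y = make_classification(n_samples=200, n_features=7, random_state=0)

model = MLPClassifier(random_state=1, max_iter=500)
cv_results = cross_validate(model, X, Y,
                            cv=KFold(n_splits=10, shuffle=True, random_state=1),
                            scoring='accuracy',
                            return_train_score=False)

# One entry per fold in each array
print("Fit scores: {}".format(cv_results['test_score']))
print("Mean fit time: {:.3f}s".format(np.mean(cv_results['fit_time'])))
```

Passing your own KFold is useful when you need reproducible or shuffled splits rather than the default consecutive ones.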

© www.soinside.com 2019 - 2024. All rights reserved.