Is Keras ROC different from scikit-learn ROC?

From the code below, it looks like evaluating the ROC with Keras and with scikit-learn really does give different results. Does anyone know why?

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.constraints import max_norm
from sklearn.metrics import roc_auc_score

# training data: X_train, y_train
# validation data: X_valid, y_valid

# Define the custom callback we will be using to evaluate roc with scikit
class MyCustomCallback(tf.keras.callbacks.Callback):

    def on_epoch_end(self, epoch, logs=None):
        y_pred = model.predict(X_valid)
        print("roc evaluated with scikit = ", roc_auc_score(y_valid, y_pred))
        return
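
(Aside: tf.keras sets the model currently being fit on the callback as self.model, so the same check can also be written without relying on the module-level model variable. A minimal sketch, where RocAucCallback and its constructor arguments are names made up for illustration:)

class RocAucCallback(tf.keras.callbacks.Callback):

    def __init__(self, X_valid, y_valid):
        super().__init__()
        self.X_valid = X_valid
        self.y_valid = y_valid

    def on_epoch_end(self, epoch, logs=None):
        # self.model refers to the model being trained by fit()
        y_pred = self.model.predict(self.X_valid)
        print("roc evaluated with scikit-learn = ", roc_auc_score(self.y_valid, y_pred))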

# Define the model.

def build_model():

    METRICS = [ 
          tf.keras.metrics.BinaryAccuracy(name='accuracy'),
          tf.keras.metrics.AUC(name='auc'),
    ]

    optimizer="adam"
    dropout=0.1
    init='uniform'
    nbr_features= vocab_size-1 #2500
    dense_nparams=256

    model = Sequential()
    model.add(Dense(dense_nparams, activation='relu', input_shape=(nbr_features,), kernel_initializer=init, kernel_constraint=max_norm(3)))
    model.add(Dropout(dropout))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=METRICS)
    return model

# instantiate the model
model = build_model()

# fit the model
history = model.fit(
    x=X_train, y=y_train,
    batch_size=8, epochs=8, verbose=1,
    validation_data=(X_valid, y_valid),
    callbacks=[MyCustomCallback()],
    shuffle=True, validation_freq=1,
    max_queue_size=10, workers=4, use_multiprocessing=True,
)

Output:

Train on 4000 samples, validate on 1000 samples
Epoch 1/8
4000/4000 [==============================] - 15s 4ms/step - loss: 0.7950 - accuracy: 0.7149 - auc: 0.7213 - val_loss: 0.7551 - val_accuracy: 0.7608 - val_auc: 0.7770
roc evaluated with scikit =  0.78766515781747
Epoch 2/8
4000/4000 [==============================] - 15s 4ms/step - loss: 0.0771 - accuracy: 0.8235 - auc: 0.8571 - val_loss: 1.0803 - val_accuracy: 0.8574 - val_auc: 0.8954
roc evaluated with scikit =  0.7795984218252997
Epoch 3/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0085 - accuracy: 0.8762 - auc: 0.9162 - val_loss: 1.2084 - val_accuracy: 0.8894 - val_auc: 0.9284
roc evaluated with scikit =  0.7705172905961992
Epoch 4/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0025 - accuracy: 0.8982 - auc: 0.9361 - val_loss: 1.1700 - val_accuracy: 0.9054 - val_auc: 0.9424
roc evaluated with scikit =  0.7808804338960933
Epoch 5/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0020 - accuracy: 0.9107 - auc: 0.9469 - val_loss: 1.1887 - val_accuracy: 0.9150 - val_auc: 0.9501
roc evaluated with scikit =  0.7811174659489438
Epoch 6/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0018 - accuracy: 0.9184 - auc: 0.9529 - val_loss: 1.2036 - val_accuracy: 0.9213 - val_auc: 0.9548
roc evaluated with scikit =  0.7822898825544409
Epoch 7/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0017 - accuracy: 0.9238 - auc: 0.9566 - val_loss: 1.2231 - val_accuracy: 0.9258 - val_auc: 0.9579
roc evaluated with scikit =  0.7817036742516923
Epoch 8/8
4000/4000 [==============================] - 14s 4ms/step - loss: 0.0016 - accuracy: 0.9278 - auc: 0.9592 - val_loss: 1.2426 - val_accuracy: 0.9293 - val_auc: 0.9600
roc evaluated with scikit =  0.7817419052279585

As you can see, from epoch 2 onward the validation ROC reported by Keras and the one computed with scikit-learn start to diverge. The same thing happens if I fit the model and then call Keras's model.evaluate(X_valid, y_valid). Any help is greatly appreciated.

EDIT: evaluating the model on a separate test set I get roc = 0.76, so scikit-learn seems to be giving the correct answer (by the way, X_train has 4000 entries, X_valid has 1000 and the test set has 15000, which is an unconventional split, but it is imposed by external constraints). Suggestions on how to improve the performance are equally appreciated.

machine-learning keras scikit-learn nlp roc
1 Answer

The problem lies in the argument you pass to sklearn's roc_auc_score(). You should use model.predict_proba() instead of model.predict():

    def on_epoch_end(self, epoch, logs=None):
        y_pred = model.predict_proba(X_valid)
        print("roc evaluated with scikit = ", roc_auc_score(y_valid, y_pred))
        return
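
For what it's worth, to rule out differences between the two AUC implementations themselves, you can feed the same labels and scores to both tf.keras.metrics.AUC and sklearn's roc_auc_score. Keras's AUC is a thresholded approximation (200 thresholds by default), so tiny differences are expected even on identical inputs, but typically far smaller than the gap seen above. A minimal sketch with made-up labels and scores:

import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

# toy labels and predicted probabilities, purely for illustration
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3])

print("sklearn roc_auc_score:", roc_auc_score(y_true, y_score))

keras_auc = tf.keras.metrics.AUC(num_thresholds=200)  # 200 is the default
keras_auc.update_state(y_true, y_score)
print("tf.keras AUC:         ", keras_auc.result().numpy())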