How do I compute the AUC and generate ROC curves for RNN and LSTM models in TensorFlow?


I am running RNN and LSTM models with a custom predefined function, trainDNN, shown below. It trains an RNN or LSTM model on time-series data and outputs binary classification scores. The train and test scores are printed, but I am trying to figure out how to compute the AUC and generate ROC curves for the RNN and LSTM binary classifiers.

import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
import h5py
import time
from sklearn.utils import shuffle
def trainDNN(path, n_days, n_features, n_neurons, 
            train_sequences, train_lengths, train_y,
            test_sequences, test_y, test_lengths,
            lstm=False, n_epochs=50, batch_size=256,
            learning_rate=0.0003, TRAIN_REC=8, TEST_REC=8):
    # we're doing binary classification
    n_outputs = 2

    # this is the initial learning rate;
    # the Adam optimizer adapts per-parameter step sizes automatically
    # learning_rate = 0.0001
    # epsilon is Adam's numerical-stability constant
    epsilon = 0.001

    # setup the graph
    tf.reset_default_graph()

    # inputs to the network
    X = tf.placeholder(tf.float32, [None, n_days, n_features])
    y = tf.placeholder(tf.int32, [None])
    seq_length = tf.placeholder(tf.int32, [None])

    # the network itself
    cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons) if lstm else tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
    outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32, sequence_length=seq_length)
    logits = fully_connected(states[-1] if lstm else states, n_outputs)

    # the training process (minimize loss) including the training operation itself
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=epsilon)
    training_op = optimizer.minimize(loss)

    # hold onto the accuracy for the logwriter
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    # this saves the network for later querying
    # currently only saves after all epochs are complete
    # but we could for example save checkpoints on a
    # regular basis
    saver = tf.train.Saver()

    # this is where we save the log files for tensorboard
    now = int(time.time())
    name = 'lstm' if lstm else 'rnn'
    root_logdir = path+"tensorflow_logs/{}/{}-{}/".format(name.upper(), name, now)
    train_logdir = "{}train".format(root_logdir)
    eval_logdir = "{}eval".format(root_logdir)
    print('train_logdir', train_logdir)
    print('eval_logdir', eval_logdir)

    # scalars that are written to the log files
    loss_summary = tf.summary.scalar('loss', loss)
    acc_summary = tf.summary.scalar('accuracy', accuracy)

    # summary operation and writer for the training data
    train_summary_op = tf.summary.merge([loss_summary, acc_summary])
    train_writer = tf.summary.FileWriter(train_logdir, tf.get_default_graph())
    # summary operation and writer for the validation data
    eval_summary_op = tf.summary.merge([loss_summary, acc_summary])
    eval_writer = tf.summary.FileWriter(eval_logdir, tf.get_default_graph())

    # initialize variables
    init = tf.global_variables_initializer()
    n_batches = len(train_sequences) // batch_size
    print(n_batches, 'batches of size', batch_size, n_epochs, 'epochs,', n_neurons, 'neurons')

    with tf.Session() as sess:
        # actually run the initialization
        init.run()
        start_time = time.time()
        for epoch in range(n_epochs):
            # at the beginning of each epoch, shuffle the training data
            train_sequences, train_y, train_lengths = shuffle(train_sequences, train_y, train_lengths)
            for iteration in range(n_batches):

                # extract the batch of training data for this iteration
                start = iteration*batch_size
                end = start+batch_size
                X_batch = train_sequences[start:end]
                y_batch = train_y[start:end]
                y_batch = y_batch.ravel()
                seq_length_batch = train_lengths[start:end]

                # every TRAIN_REC steps, save a summary of training accuracy & loss
                if iteration % TRAIN_REC == 0:
                    train_summary_str = train_summary_op.eval(
                        feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                    )
                    step = epoch * n_batches + iteration
                    train_writer.add_summary(train_summary_str, step)
                    # without this flush, tensorboard isn't always current
                    train_writer.flush()

                # every TEST_REC steps, save a summary of validation accuracy & loss
                # TODO: this runs all validation data at once. if validation is
                # sufficiently large, this will fail. better would be to either
                # pick a random subset of validation data, or even better, run
                # validation in multiple batches and save the validation accuracy 
                # & loss based on the aggregation of all of the validation batches.
                if iteration % TEST_REC == 0:
                    summary_str = eval_summary_op.eval(
                        feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
                    )
                    step = epoch * n_batches + iteration
                    eval_writer.add_summary(summary_str, step)
                    # without this flush, tensorboard isn't always current
                    eval_writer.flush()

                # run training.
                # this is where the network learns.
                sess.run(
                    training_op,
                    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                )

            # after every epoch, calculate the accuracy of the last seen training batch 
            acc_train = accuracy.eval(
                feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
            )
            # after each epoch, calculate the accuracy of the test data
            acc_test = accuracy.eval(
                feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
            )

            # print the training & validation accuracy to the console
            print(epoch, time.strftime('%m/%d %H:%M:%S'), "Accuracy train:", acc_train, "test:", acc_test)


        # save the model (for more training or inference) after all
        # training is complete
        save_path = saver.save(sess, root_logdir+"model_final.ckpt")

        # close the writers
        train_writer.close()
        eval_writer.close()    
        # report the final test accuracy
        # (log() and percent() are project-specific helpers defined elsewhere)
        log(["{}-{} model score".format(name.upper(), now), percent(acc_test)])
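
For reference, a minimal sketch of how trainDNN might be invoked (the data, shapes, and output path below are illustrative assumptions, not taken from the original project; the final log()/percent() call inside trainDNN relies on helpers defined elsewhere):

import numpy as np

# hypothetical data: 1000 training and 200 test sequences,
# each padded to 30 days with 5 features per day
train_sequences = np.random.rand(1000, 30, 5).astype(np.float32)
train_y = np.random.randint(0, 2, size=(1000, 1))
train_lengths = np.full(1000, 30, dtype=np.int32)
test_sequences = np.random.rand(200, 30, 5).astype(np.float32)
test_y = np.random.randint(0, 2, size=(200, 1))
test_lengths = np.full(200, 30, dtype=np.int32)

trainDNN('./output/', n_days=30, n_features=5, n_neurons=100,
         train_sequences=train_sequences, train_lengths=train_lengths, train_y=train_y,
         test_sequences=test_sequences, test_y=test_y, test_lengths=test_lengths,
         lstm=True, n_epochs=5, batch_size=256)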

UPDATE:

I evaluated the logits and predictions with the following script:

n_epochs = 2
batch_size = 2000
n_batches = len(train_sequences) // batch_size
print(n_batches)

with tf.Session() as sess:
    init.run()
    #sess.run( tf.local_variables_initializer() )
    for epoch in range(n_epochs):
        train_sequences, train_y, train_lengths = shuffle(train_sequences, train_y, train_lengths)
        for iteration in range(n_batches):
            start = iteration*batch_size
            end = start+batch_size
            X_batch = train_sequences[start:end]
            y_batch = train_y[start:end]
            seq_length_batch = train_lengths[start:end]
            if iteration % 20 == 0:
                train_summary_str = train_summary_op.eval(
                    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                )
                step = epoch * n_batches + iteration
            if iteration % 200 == 0:
                summary_str = eval_summary_op.eval(
                    feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths}
                )
                step = epoch * n_batches + iteration
            sess.run(
                training_op,
                feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
            )
        acc_train = accuracy.eval(
            feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
        )
        acc_test = accuracy.eval(
            feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths}
        )
        probs = logits.eval(feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths})
        predictions = correct.eval(feed_dict = {logits: probs, y: test_y})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)  # "Manual score:", score)

This returns probs, which is essentially a matrix with one row per test case and two columns containing the score for each of the two binary classes. The predictions object records whether each prediction was correct. I am skeptical, because the ReLU output scores are not as intuitive as sigmoid scores: they are no longer based on the default 0.5 cutoff between positive and negative predictions; instead, the predicted class is simply whichever one has the higher score. Is it really possible to generate a ROC curve from the ReLU output?
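
Independently of the TensorFlow-side metric discussed in the answer below, one way to turn these outputs into a ROC curve offline is to softmax the two logit columns into class-1 probabilities and hand them to scikit-learn. A sketch, assuming probs and test_y are the NumPy arrays produced above:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

# softmax across the two logit columns -> probability of class 1
exp_scores = np.exp(probs - probs.max(axis=1, keepdims=True))
class1_prob = exp_scores[:, 1] / exp_scores.sum(axis=1)

fpr, tpr, thresholds = roc_curve(test_y.ravel(), class1_prob)
print('AUC:', roc_auc_score(test_y.ravel(), class1_prob))

plt.plot(fpr, tpr)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.show()

The ROC curve depends only on how the test cases are ranked by the class-1 score, so the absence of a fixed 0.5 cutoff is not a problem; every possible threshold is swept to produce the curve.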

python tensorflow deep-learning lstm rnn
1 Answer

You can use tf.metrics.auc() for this. Note that you need to one-hot encode your labels so they can be matched against the per-class scores, and that the update_op it returns also needs to be run if you want to accumulate the AUC over multiple sess.run() calls; see the separate section below.

In your code, you create y_one_hot with tf.one_hot(), and you can put all of this right after accuracy:

y_one_hot = tf.one_hot( y, n_outputs )
auc, auc_update_op = tf.metrics.auc( y_one_hot, logits )

Before starting the training loop you also need to initialize the local variables that auc creates, perhaps right after init.run():

sess.run( tf.initialize_local_variables() )

Then, wherever you evaluate the accuracy, you also need to run auc, using sess.run() rather than .eval(), like this (untested):

# after every epoch, calculate the accuracy of the last seen training batch
acc_train, auc_val = sess.run(
    [ accuracy, auc ],
    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
)
# after each epoch, calculate the accuracy of the test data
acc_test, auc_val = sess.run(
    [ accuracy, auc ],
    feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
)
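
One caveat: tf.metrics.auc() expects its prediction values to lie in [0, 1], while the logits produced by the fully connected layer here are not confined to that range. A safer variant (a sketch reusing the same tensors) is to squash the logits with a softmax first:

# probabilities in [0, 1], as tf.metrics.auc() expects
probs_op = tf.nn.softmax( logits )
y_one_hot = tf.one_hot( y, n_outputs )
auc, auc_update_op = tf.metrics.auc( y_one_hot, probs_op, curve='ROC' )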

Accumulating over multiple batches

If you do want to make use of the accumulating behaviour of tf.metrics.auc(), you also have to take care of resetting the accumulation once you want to start a new calculation. For that you need to collect the local variables it creates, so create the auc like this:

with tf.variable_scope( "AUC" ):
    auc, auc_update_op = tf.metrics.auc( predictions=y_pred, labels=y_true, curve = 'ROC' )
auc_variables = [ v for v in tf.local_variables() if v.name.startswith( "AUC" ) ]
auc_reset_op = tf.initialize_variables( auc_variables )

When you are done accumulating, reset the internal variables of auc like this:

session.run( auc_reset_op )

You also need to make sure to run auc_update_op every time you run auc with sess.run().
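
Putting the accumulation pieces together, a rough sketch of an evaluation pass (assuming y_pred and y_true are wired to the network's softmax output and one-hot labels, and that get_test_batches() is a hypothetical helper yielding the evaluation batches):

# start a fresh accumulation
sess.run( auc_reset_op )

# run the update op on every evaluation batch so the counts accumulate
for X_batch, y_batch, len_batch in get_test_batches():
    sess.run( auc_update_op,
              feed_dict = {X: X_batch, y: y_batch, seq_length: len_batch} )

# read the accumulated AUC only after all batches have been processed
print( 'test AUC:', sess.run( auc ) )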