CNN for image classification overfits, apparently does not take the next batch

Problem description

I am trying to build a CNN with TensorFlow that classifies images, following the Google tutorials on CNNs. I wrote a function that loads the image dataset and a function that extracts the training batches. But even though I advance to the next batch, the network always trains on the same batch. The dataset has 10000 images, and I reach 100% accuracy in fewer than 10 iterations, so I believe the model never moves on to the next batch. Here is the code:

# Training Parameters
learning_rate = 0.001
batch_size = 128
epochs = 10
MODE = 'TRAIN'

# Function that loads the entire dataset of images (X) with their
# respective labels (Y). X and Y are two np.array
len_X, X, Y = get_images(
    files_path=dataset_path,
    img_size_h=1000,
    img_size_w=48,
    mode='TRAIN',
    randomize=True
)

# Function that loads a batch from X and Y
X_batch, Y_X_batch = next_batch(
    total=len_X,
    images=X,
    labels=Y,
    batch_size=batch_size,
    index=0
)

logits = cnn_model_fn(X_batch, MODE)
prediction = tf.nn.softmax(logits)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y_X_batch))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss)
correct_predict = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y_X_batch, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

init = tf.global_variables_initializer()
best_acc = 0

with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.Saver()

    if MODE == 'TRAIN':
        print("TRAINING MODE")
        for step in range(1, epochs + 1):
            for i in range(0, int(len_X / batch_size) + 1):
                if i > 0:
                    X_batch, Y_X_batch = next_batch(
                        total=len_X,
                        images=X,
                        labels=Y,
                        batch_size=batch_size,
                        index=i
                    )
                sess.run(train_op)
                los, acc = sess.run([loss, accuracy])
                if acc >= best_acc:
                    best_acc = acc
        writer = tf.summary.FileWriter(TensorBoard_path, sess.graph)
    elif MODE == 'TEST':
        # TEST MODE
        pass

# sess.close()
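For reference, the next_batch helper is not shown in the question; an index-based slice along the following lines is presumably what is meant (a hypothetical sketch, not the asker's actual implementation):

# Hypothetical sketch of the next_batch helper (its implementation is
# not shown in the question): return slice number `index` of the
# dataset, wrapping around once the end of the arrays is reached.
def next_batch(total, images, labels, batch_size, index):
    start = (index * batch_size) % total
    end = min(start + batch_size, total)
    return images[start:end], labels[start:end]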

def cnn_model_fn(X, MODE):

    # INPUT LAYER
    input_layer = tf.reshape(X, [-1, 1000, 48, 1])

    # CONVOLUTIONAL LAYER #1
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=4,
        kernel_size=[10, 10],
        strides=(2, 2),
        padding="valid",
    )
    conv1_relu = tf.nn.relu(conv1)

    # POOLING LAYER #1
    pool1 = tf.layers.max_pooling2d(
        inputs=conv1_relu,
        pool_size=[2, 2],
        strides=2
    )

    # CONVOLUTIONAL LAYER #2
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
    )
    conv2_relu = tf.nn.relu(conv2)

    # POOLING LAYER #2
    pool2 = tf.layers.max_pooling2d(
        inputs=conv2_relu,
        pool_size=[2, 2],
        strides=2
    )
    # Flatten the pooled feature maps before the dense layer
    x = pool2.shape.as_list()
    pool2_flat = tf.reshape(pool2, [-1, x[1] * x[2] * x[3]])


    # DENSE LAYER
    dense = tf.layers.dense(
        inputs=pool2_flat,
        units=1024,
    )

    dense_relu = tf.nn.relu(dense)

    # ADD THE DROPOUT OPERATION
    dropout = tf.layers.dropout(
        inputs=dense_relu,
        rate=0.4,
        training=MODE == tf.estimator.ModeKeys.TRAIN
    )

    # LOGIT LAYER
    logits = tf.layers.dense(
        inputs=dropout,
        units=2
    )

    return logits
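As a sanity check on the architecture above, the intermediate shapes can be worked out by building the graph once and printing the static shapes (a minimal sketch; the arithmetic in the comments assumes the 1000×48×1 input and the hyperparameters shown above):

# Hypothetical shape check for the model above.
X_check = tf.placeholder(tf.float32, [None, 1000, 48])
logits_check = cnn_model_fn(X_check, 'TRAIN')
# conv1 (10x10, stride 2, valid): (1000-10)/2+1 = 496, (48-10)/2+1 = 20 -> (?, 496, 20, 4)
# pool1 (2x2, stride 2):          -> (?, 248, 10, 4)
# conv2 (5x5, same, stride 1):    -> (?, 248, 10, 64)
# pool2 (2x2, stride 2):          -> (?, 124, 5, 64), flattened to 124*5*64 = 39680
print(logits_check.shape)  # expected: (?, 2)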

And this is how the graph of the model is constructed:

Y_batch_placeholder = tf.placeholder(tf.float32, [None, 2])
X_batch_placeholder = tf.placeholder(tf.float32, [None, 1000, 48])


logits = cnn_model_fn(X_batch_placeholder, MODE)
prediction = tf.nn.softmax(logits)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y_batch_placeholder))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss)
correct_predict = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y_batch_placeholder, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

Thank you very much for your help.

python tensorflow machine-learning deep-learning conv-neural-network
1 Answer

You are not using the TensorFlow placeholders. Make sure you define and use X_batch_placeholder and Y_batch_placeholder in your loss function, and then run your session with a feed_dict, like this:

_, los, acc = sess.run([train_op, loss, accuracy], feed_dict={X_batch_placeholder: X_batch, Y_batch_placeholder: Y_batch})

Some TensorFlow variables are placeholders, and you need to feed them the relevant data every time you run the session. See here: https://hanxiao.github.io/2017/07/07/Get-10x-Speedup-in-Tensorflow-Multi-Task-Learning-using-Python-Multiprocessing/ and here: Tensorflow: When should I use or not use `feed_dict`?
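Concretely, the training loop from the question would then look roughly like this (a minimal sketch, assuming the placeholder-based graph and the next_batch helper from the question; only the feeding logic changes):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1, epochs + 1):
        for i in range(int(len_X / batch_size) + 1):
            X_batch, Y_batch = next_batch(
                total=len_X, images=X, labels=Y,
                batch_size=batch_size, index=i
            )
            # Feed the current batch into the placeholders, so every
            # sess.run call trains on new data instead of the arrays
            # baked into the graph at construction time.
            _, los, acc = sess.run(
                [train_op, loss, accuracy],
                feed_dict={X_batch_placeholder: X_batch,
                           Y_batch_placeholder: Y_batch}
            )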
