Training accuracy is extremely low after normalizing the MNIST training and test set images

Problem description

I want to classify the handwritten digits in MNIST with TensorFlow, but after normalizing the pixel values of the training and test sets, the accuracy on the training set is extremely low!

Could someone help me find the cause? The code is as follows:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset
mnist = input_data.read_data_sets('original_data/', one_hot=True)

# Define model inputs and outputs
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Define variables and the model
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y_pred = tf.nn.softmax(tf.matmul(x, W) + b)

# Define the loss function and optimizer
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_pred))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Define the evaluation function
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Normalize the pixel values
train_images = mnist.train.images / 255.0
test_images = mnist.test.images / 255.0

with tf.Session() as sess:
    tf.global_variables_initializer().run()

    for epoch in range(20):
        # Train the model
        for i in range(55000):
            batch_xs, batch_ys = mnist.train.next_batch(100)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})

        # Print the accuracy on the training set and test set
        train_acc = sess.run(accuracy, feed_dict={x: train_images, y: mnist.train.labels})
        test_acc = sess.run(accuracy, feed_dict={x: test_images, y: mnist.test.labels})
        print("Epoch %d: train accuracy %f, test accuracy %f" % (epoch+1, train_acc, test_acc))

The output of the code is as follows:

I tried commenting out the normalization code, i.e. removing these two lines:

train_images = mnist.train.images / 255.0 
test_images = mnist.test.images / 255.0

and trained with the un-normalized training and test sets.

The result is that the accuracy on the training set is about 0.9. I do not understand why the accuracy is so low once the images are normalized.
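
For reference, here is a small diagnostic I could run (a minimal sketch, assuming the same input_data loader and the same 'original_data/' directory as above) to compare the value range of the images as the loader returns them with the range after my extra division by 255.0:

from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset the same way as in the training script above
mnist = input_data.read_data_sets('original_data/', one_hot=True)

# Value range of the images exactly as the loader returns them
print("loader output: min=%f, max=%f" % (mnist.train.images.min(), mnist.train.images.max()))

# Value range after the extra division by 255.0 used for evaluation
rescaled = mnist.train.images / 255.0
print("after / 255.0: min=%f, max=%f" % (rescaled.min(), rescaled.max()))

If the two ranges turn out to be different, the model is trained on batches from mnist.train.next_batch at one scale and evaluated on train_images / test_images at another, which is the kind of mismatch I suspect but have not confirmed.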

Please help me solve this problem.

python tensorflow logistic-regression mnist