Keras gives noticeably higher test accuracy on MNIST than tensorflow.keras


I verified my TensorFlow (v2.2.0), CUDA (10.1), and cuDNN (libcudnn7-dev_7.6.5.32-1+cuda10.1_amd64.deb) installation with a basic example, but I am getting strange results...

When I run the following example in Keras, as shown at https://keras.io/examples/mnist_cnn/, I get ~99% validation accuracy. When I adjust the imports to run it through TensorFlow, I only get 86%.

I may be forgetting something.

To run with tensorflow:

from __future__ import print_function

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Unfortunately, I get the following results:

Epoch 2/12
469/469 [==============================] - 3s 6ms/step - loss: 2.2245 - accuracy: 0.2633 - val_loss: 2.1755 - val_accuracy: 0.4447
Epoch 3/12
469/469 [==============================] - 3s 7ms/step - loss: 2.1485 - accuracy: 0.3533 - val_loss: 2.0787 - val_accuracy: 0.5147
Epoch 4/12
469/469 [==============================] - 3s 6ms/step - loss: 2.0489 - accuracy: 0.4214 - val_loss: 1.9538 - val_accuracy: 0.6021
Epoch 5/12
469/469 [==============================] - 3s 6ms/step - loss: 1.9224 - accuracy: 0.4845 - val_loss: 1.7981 - val_accuracy: 0.6611
Epoch 6/12
469/469 [==============================] - 3s 6ms/step - loss: 1.7748 - accuracy: 0.5376 - val_loss: 1.6182 - val_accuracy: 0.7039
Epoch 7/12
469/469 [==============================] - 3s 6ms/step - loss: 1.6184 - accuracy: 0.5750 - val_loss: 1.4296 - val_accuracy: 0.7475
Epoch 8/12
469/469 [==============================] - 3s 7ms/step - loss: 1.4612 - accuracy: 0.6107 - val_loss: 1.2484 - val_accuracy: 0.7719
Epoch 9/12
469/469 [==============================] - 3s 6ms/step - loss: 1.3204 - accuracy: 0.6402 - val_loss: 1.0895 - val_accuracy: 0.7945
Epoch 10/12
469/469 [==============================] - 3s 6ms/step - loss: 1.2019 - accuracy: 0.6650 - val_loss: 0.9586 - val_accuracy: 0.8097
Epoch 11/12
469/469 [==============================] - 3s 7ms/step - loss: 1.1050 - accuracy: 0.6840 - val_loss: 0.8552 - val_accuracy: 0.8216
Epoch 12/12
469/469 [==============================] - 3s 7ms/step - loss: 1.0253 - accuracy: 0.7013 - val_loss: 0.7734 - val_accuracy: 0.8337
Test loss: 0.7734305262565613
Test accuracy: 0.8337000012397766

far from the 99.25% I get when importing Keras. What am I missing?

tensorflow testing optimization keras mnist

1 Answer

The optimizer parameters of keras and tensorflow.keras are not the same.

The crux of the problem is that the default parameters of the Adadelta optimizer differ between Keras and TensorFlow; specifically, the learning rates differ. We can see this with a simple check. In the Keras version of the code, print(keras.optimizers.Adadelta().get_config()) gives

{'learning_rate': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}

while in the TensorFlow version, print(tf.optimizers.Adadelta().get_config()) gives us

{'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}

We can see that the learning rates of the two Adadelta optimizers differ: Keras defaults to 1.0, while TensorFlow defaults to 0.001 (consistent with its other optimizers).
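As a quick sanity check (a plain-Python sketch using the two configs quoted above as literal dicts, not the live optimizer objects), we can confirm that the learning rate is the only shared hyperparameter that differs:

```python
# The two default configs printed above, copied as plain dicts.
keras_cfg = {'learning_rate': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}
tf_cfg = {'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0,
          'rho': 0.95, 'epsilon': 1e-07}

# Collect every key present in both configs whose values disagree.
diff = {k for k in keras_cfg if k in tf_cfg and keras_cfg[k] != tf_cfg[k]}
print(diff)  # {'learning_rate'}
```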

The effect of the higher learning rate

Because the Keras version of the Adadelta optimizer has the larger learning rate, it converges much faster and reaches high accuracy within 12 epochs, whereas the TensorFlow Adadelta optimizer needs longer training. If you increased the number of training epochs, the TensorFlow model would likely also reach 99% accuracy.
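To see why the learning rate matters even for an adaptive method like Adadelta, here is a minimal scalar sketch of the update rule (based on the Adadelta paper, with the learning rate applied as a final multiplier the way Keras-style implementations do; the function is illustrative, not TensorFlow's actual code):

```python
import math

def adadelta_step(grad, acc_grad, acc_delta, lr, rho=0.95, eps=1e-7):
    """One Adadelta update for a scalar parameter: accumulate a decaying
    average of squared gradients, rescale the gradient, then multiply by lr."""
    acc_grad = rho * acc_grad + (1 - rho) * grad ** 2
    delta = math.sqrt(acc_delta + eps) / math.sqrt(acc_grad + eps) * grad
    acc_delta = rho * acc_delta + (1 - rho) * delta ** 2
    return lr * delta, acc_grad, acc_delta

# Identical gradient and state; only the learning rate differs.
step_keras, _, _ = adadelta_step(grad=0.5, acc_grad=0.0, acc_delta=0.0, lr=1.0)
step_tf, _, _ = adadelta_step(grad=0.5, acc_grad=0.0, acc_delta=0.0, lr=0.001)
print(step_keras / step_tf)  # the TF default takes steps ~1000x smaller
```

With every update scaled down by a factor of 1000, 12 epochs are simply not enough for the TensorFlow version to travel as far through parameter space, which matches the slow loss decrease in the log above.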

The fix

Instead of training longer, however, we can simply make the TensorFlow model behave like the Keras one by changing Adadelta's learning rate to 1.0:

model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.optimizers.Adadelta(learning_rate=1.0), # Note the new learning rate
    metrics=['accuracy'])

With this change, running on TensorFlow we get the following performance:

Epoch 12/12
60000/60000 [==============================] - 102s 2ms/sample - loss: 0.0287 - accuracy: 0.9911 - val_loss: 0.0291 - val_accuracy: 0.9907
Test loss: 0.029134796149221757
Test accuracy: 0.9907

close to the expected 99.25% accuracy.

P.S. Incidentally, the differing default parameters between Keras and TensorFlow appear to be a known issue: they were made consistent once, but the change was later reverted (https://github.com/keras-team/keras/pull/12841). Software development is hard.
