Why does VGG-16 perform poorly on the CIFAR-10 dataset?

Question · 0 votes · 2 answers

I am trying to implement the VGG-16 convolutional neural network for the CIFAR-10 dataset using TensorFlow, but my training accuracy is stuck at around 10%. Is there something wrong with my code?

import tensorflow as tf
from tensorflow.keras import datasets
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()

X_train.shape, y_train.shape, X_test.shape, y_test.shape

X_train = X_train/255
X_test = X_test/255
y_train = y_train.reshape(-1,)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation="relu",
                           padding="same", input_shape=(32,32,3)),
    tf.keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=256, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=256, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=256, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.Conv2D(filters=512, kernel_size=(3,3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")
])

model.summary()

model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
          optimizer=tf.keras.optimizers.Adam(),
          metrics=["accuracy"])

X_train[0].shape, y_train[0].shape

model.fit(X_train, y_train, epochs=100)
Tags: tensorflow · keras · deep-learning · conv-neural-network · vgg-net
2 Answers

1 vote

It looks like you have not yet found a suitable training schedule.

If you don't mind changing the model slightly, I suggest adding BatchNorm after every convolutional layer. In general, models with BatchNorm are easier to train.
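As a sketch of that change (not the asker's full network), here are the first two VGG blocks with BatchNormalization inserted between each convolution and its ReLU; `vgg_block` is a hypothetical helper, and the activation is applied as a separate layer so the normalization sits before it:

```python
import tensorflow as tf

def vgg_block(filters, n_convs):
    """Hypothetical helper: n_convs conv layers, each followed by
    BatchNorm and ReLU, then one 2x2 max-pool."""
    layers = []
    for _ in range(n_convs):
        layers.append(tf.keras.layers.Conv2D(filters, (3, 3), padding="same"))
        layers.append(tf.keras.layers.BatchNormalization())
        layers.append(tf.keras.layers.Activation("relu"))
    layers.append(tf.keras.layers.MaxPool2D((2, 2), strides=(2, 2)))
    return layers

# Truncated two-block example for CIFAR-10's 32x32x3 inputs.
model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(32, 32, 3))]
    + vgg_block(64, 2)
    + vgg_block(128, 2)
    + [tf.keras.layers.Flatten(),
       tf.keras.layers.Dense(10, activation="softmax")]
)
```

The same `vgg_block` pattern extends to the remaining 256- and 512-filter stages of the question's model.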

Also, do you lower the learning rate after a certain number of iterations? Past some point, a learning rate that is too large may no longer reduce your training error. For example, ResNet is trained with an initial learning rate of 0.1 for 100 epochs, then 0.01 for another 50 epochs, and 0.001 for another 50 epochs.
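A step schedule like the ResNet recipe above can be sketched with a `LearningRateScheduler` callback; the epoch boundaries here simply mirror the 100/50/50 split mentioned and are illustrative, not tuned:

```python
import tensorflow as tf

def step_schedule(epoch, lr=None):
    """Piecewise-constant learning rate: 0.1, then 0.01, then 0.001."""
    if epoch < 100:
        return 0.1
    elif epoch < 150:
        return 0.01
    return 0.001

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_schedule)
# model.fit(X_train, y_train, epochs=200, callbacks=[lr_callback])
```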


0 votes
  1. y_train = keras.utils.to_categorical(y_train, 10)
  2. Change your loss to
    tf.keras.losses.categorical_crossentropy
  3. Lower the Adam learning rate to 0.001

Then your code will work.
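Put together, the three steps look roughly like this; to keep the sketch self-contained it uses tiny stand-in labels with the same `(n, 1)` shape that `cifar10.load_data()` returns, and a trivial stand-in `model` in place of the question's VGG network:

```python
import numpy as np
import tensorflow as tf

# Stand-in labels shaped like CIFAR-10's: (n, 1) integer class ids.
y_train = np.array([[3], [0], [9]])

# Step 1: one-hot encode so the labels match categorical_crossentropy.
y_train = tf.keras.utils.to_categorical(y_train, 10)  # shape (3, 10)

# Stand-in model; in the question this is the full VGG-16 Sequential net.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Steps 2 and 3: non-sparse loss, explicit Adam learning rate.
model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    metrics=["accuracy"],
)
```

Note that with one-hot labels the loss must be the non-sparse `categorical_crossentropy`; alternatively, keeping integer labels with `sparse_categorical_crossentropy` (as in the question) is also valid, so the learning-rate and normalization advice from the first answer likely matters more here.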

© www.soinside.com 2019 - 2024. All rights reserved.