Using Keras with HDF5Matrix with labels only


I believe this is my first question on Stack Overflow, so I apologize in advance if I'm not following all the guidelines. I recently started using Keras for deep learning, and since I handle large datasets as HDF5 files via h5py, I looked for a way to train Keras models on very large HDF5 files. I found that the most common approach is to use HDF5Matrix from keras.utils.io_utils.

I modified one of the Keras examples (mnist_cnn) as follows:

'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

# My Imports
from os.path import exists
import h5py
from keras.utils.io_utils import HDF5Matrix
batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

#-----------------------------------HDF5 files creation---------------------------------------
sample_file_name = "x.hdf5"
solution_file_name = "y.hdf5"
train_name = "train"
test_name = "test"

#Create dataset
if (not exists(sample_file_name)) and (not exists(solution_file_name)):
    samples_file = h5py.File(sample_file_name,mode='a')
    solutions_file = h5py.File(solution_file_name,mode='a')
    samples_train = samples_file.create_dataset(train_name,data=x_train)
    samples_test = samples_file.create_dataset(test_name, data=x_test)
    solution_train = solutions_file.create_dataset(train_name, data=y_train)
    solution_test = solutions_file.create_dataset(test_name, data=y_test)
    samples_file.flush()
    samples_file.close()
    solutions_file.flush()
    solutions_file.close()

x_train = HDF5Matrix(sample_file_name,train_name)
x_test = HDF5Matrix(sample_file_name,test_name)
y_train = HDF5Matrix(solution_file_name,train_name)
y_test = HDF5Matrix(solution_file_name,test_name)
#---------------------------------------------------------------------------------------------

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

# If using HDF5Matrix one needs to disable shuffle
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test),
          shuffle=False)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

However, something worries me. In segmentation and multi-class problems where the number of classes is very large, storing the solutions in categorical (one-hot) format is very wasteful. Moreover, it means that as soon as a new class is added, the whole dataset has to be changed accordingly. That's why I thought of using HDF5Matrix's normalizer feature, as follows:

'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

# My Imports
from os.path import exists
import h5py
from keras.utils.io_utils import HDF5Matrix
batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

#-----------------------------------HDF5 files creation---------------------------------------
sample_file_name = "x.hdf5"
solution_file_name = "y.hdf5"
train_name = "train"
test_name = "test"

#Create dataset
if (not exists(sample_file_name)) and (not exists(solution_file_name)):
    samples_file = h5py.File(sample_file_name,mode='a')
    solutions_file = h5py.File(solution_file_name,mode='a')
    samples_train = samples_file.create_dataset(train_name,data=x_train)
    samples_test = samples_file.create_dataset(test_name, data=x_test)
    solution_train = solutions_file.create_dataset(train_name, data=y_train)
    solution_test = solutions_file.create_dataset(test_name, data=y_test)
    samples_file.flush()
    samples_file.close()
    solutions_file.flush()
    solutions_file.close()

x_train = HDF5Matrix(sample_file_name,train_name)
x_test = HDF5Matrix(sample_file_name,test_name)
y_train = HDF5Matrix(solution_file_name,train_name,normalizer=lambda solution: keras.utils.to_categorical(solution,num_classes))
y_test = HDF5Matrix(solution_file_name,test_name,normalizer=lambda solution: keras.utils.to_categorical(solution,num_classes))
#---------------------------------------------------------------------------------------------

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

# If using HDF5Matrix one needs to disable shuffle
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test),
          shuffle=False)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

However, this raises an error implying that the shapes of the solutions should match, and that a normalizer should not be used this way:

ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (60000, 1, 10)

So, is there a way to keep the data in HDF5 (or, if that's impossible, in some other format) and use Keras while storing only the labels (rather than categorical vectors), without turning the task into a regression problem?

1 Answer

You are getting this error because of these lines in Keras's input-validation code.

Keras checks the input shapes before training. The problem is that HDF5Matrix returns the pre-normalization shape when you call .shape, so Keras believes you have a (60000,) array for y_train and a (10000,) array for y_test.

However, when a slice of the matrix is accessed, the normalizer is applied, so that e.g. y_train[5:7].shape does have the final expected shape: (2, 10).
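A quick way to see this discrepancy (a minimal sketch, assuming the y.hdf5 file with raw integer labels created by the script above):

from keras.utils import to_categorical
from keras.utils.io_utils import HDF5Matrix

y_train = HDF5Matrix("y.hdf5", "train",
                     normalizer=lambda s: to_categorical(s, 10))
print(y_train.shape)       # (60000,) -- reports the on-disk, pre-normalizer shape
print(y_train[5:7].shape)  # (2, 10)  -- the normalizer runs when slicing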

This is mostly because the normalizer is not really expected to change the shape, although Keras could in fact handle that case.

You can fix it by using fit_generator instead of fit, so that training only ever sees normalized data:

def generator(features, labels, size):
    # Loop forever; Keras expects generators to never terminate
    while True:
        start, end = 0, size
        # <= ensures the last full batch is yielded even when len(features)
        # is an exact multiple of size; any trailing partial batch is
        # dropped, matching steps_per_epoch = len(features) // size
        while end <= len(features):
            s = slice(start, end)
            # you can actually do the normalization here if you want
            yield features[s], labels[s]
            start, end = end, end + size

model.fit_generator(
    generator(x_train, y_train, batch_size),
    steps_per_epoch=len(x_train) // batch_size,
    epochs=1,
    verbose=1, 
    validation_data=generator(x_test, y_test, batch_size),
    validation_steps=len(x_test) // batch_size,
    shuffle=False)

Note that you can perform any kind of normalization inside the generator function, transparently to Keras, and that you can use different batch sizes for training and validation.
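For instance, to stay with the goal of storing only raw labels on disk, the one-hot expansion can happen inside the generator (a minimal sketch; label_generator is a hypothetical name, and num_classes comes from the scripts above):

def label_generator(features, labels, size):
    # Same batching scheme as above, but the labels are stored as raw
    # integers and expanded to one-hot vectors batch by batch, so the
    # full categorical matrix never has to exist on disk or in memory.
    while True:
        start, end = 0, size
        while end <= len(features):
            s = slice(start, end)
            yield features[s], keras.utils.to_categorical(labels[s], num_classes)
            start, end = end, end + size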

You also have to change the evaluation in the same way:

score = model.evaluate_generator(
    generator(x_test, y_test, batch_size),
    steps=len(x_test) // batch_size)

By the way, I think the normalizer-based solution is a good idea.
