How do I shape a TFRecordDataset to match the Model API?


I'm building a model based on this code for noise suppression. The problem with my vanilla implementation is that it loads all the data at once, which is not the best idea when the training data gets really large. My input file, denoted training.h5 in the linked code, already exceeds 30 GB.

I decided to switch to the tf.data interface, which should let me work with large datasets; my problem is that I don't know how to properly shape the TFRecordDataset so that it meets what the Model API expects.

If you went with model.fit(x_train, [y_train, vad_train], ...), you would basically need the following:

  • x_train with shape [nb_sequences, window, 42]
  • y_train with shape [nb_sequences, window, 22]
  • vad_train with shape [nb_sequences, window, 1]

window is typically fixed (2000 in the code), so the only variable, nb_sequences, depends on the size of your dataset. However, with tf.data we don't supply x and y separately, but only x (see the Model API docs).
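For comparison, here is a minimal sketch of mine showing both calling conventions; the array sizes are placeholders, and the fit() call is commented out because it needs the compiled model built further below:

import numpy as np
import tensorflow as tf

nb_sequences, window = 100, 2000  # placeholders; window = 2000 as in the linked code
x_train = np.zeros((nb_sequences, window, 42), dtype=np.float32)
y_train = np.zeros((nb_sequences, window, 22), dtype=np.float32)
vad_train = np.zeros((nb_sequences, window, 1), dtype=np.float32)

# With tf.data, the targets move into the dataset itself as the second tuple element:
dataset = tf.data.Dataset.from_tensor_slices((x_train, (y_train, vad_train)))
# model.fit(dataset.batch(32))  # equivalent to model.fit(x_train, [y_train, vad_train])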

Saving the tfrecord to a file

To make this reproducible, I created the input file with the following code:

# Each record holds 65 floats: 42 input features (X),
# 22 targets (y) and 1 voice-activity flag (vad).
writer = tf.io.TFRecordWriter(path='example.tfrecord')
for record in data:
    feature = {}
    feature['X'] = tf.train.Feature(float_list=tf.train.FloatList(value=record[:42]))
    feature['y'] = tf.train.Feature(float_list=tf.train.FloatList(value=record[42:64]))
    feature['vad'] = tf.train.Feature(float_list=tf.train.FloatList(value=[record[64]]))
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    serialized = example.SerializeToString()
    writer.write(serialized)
writer.close()

data is the training data, with shape [10000, 65]. My example.tfrecord is available here. It's 3 MB; the real one is 30 GB+.

As you may notice, the numpy array in the linked code has shape [x, 87], while mine is [x, 65]. That's fine; the remaining columns aren't used anywhere.
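To sanity-check the written file, you can parse the first record back and verify the feature lengths; this is a small sketch of mine, assuming TF 2.x eager execution:

import tensorflow as tf

raw_record = next(iter(tf.data.TFRecordDataset('example.tfrecord')))
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(len(example.features.feature['X'].float_list.value))    # expect 42
print(len(example.features.feature['y'].float_list.value))    # expect 22
print(len(example.features.feature['vad'].float_list.value))  # expect 1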

Loading the dataset with tf.data.TFRecordDataset

I'd like to use tf.data to load the data "on demand" with some prefetching, so there's no need to keep all of it in memory. My attempt:

import datetime
import numpy as np
import h5py
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import GRU
from tensorflow.keras import regularizers
from tensorflow.keras.constraints import Constraint
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras import backend as K
from tensorflow.keras.layers import concatenate

def load_dataset(path):
    def _parse_function(example_proto):
        keys_to_features = {
            'X': tf.io.FixedLenFeature([42], tf.float32),
            'y': tf.io.FixedLenFeature([22], tf.float32),
            'vad': tf.io.FixedLenFeature([1], tf.float32)
        }
        features = tf.io.parse_single_example(example_proto, keys_to_features)
        # Note the nested structure: (inputs, (target_1, target_2))
        return (features['X'], (features['y'], features['vad']))

    dataset = tf.data.TFRecordDataset(path).map(_parse_function)
    return dataset


def my_crossentropy(y_true, y_pred):
    return K.mean(2 * K.abs(y_true - 0.5) * K.binary_crossentropy(y_pred, y_true), axis=-1)


def mymask(y_true):
    return K.minimum(y_true + 1., 1.)


def msse(y_true, y_pred):
    return K.mean(mymask(y_true) * K.square(K.sqrt(y_pred) - K.sqrt(y_true)), axis=-1)


def mycost(y_true, y_pred):
    return K.mean(mymask(y_true) * (10 * K.square(K.square(K.sqrt(y_pred) - K.sqrt(y_true))) + K.square(
        K.sqrt(y_pred) - K.sqrt(y_true)) + 0.01 * K.binary_crossentropy(y_pred, y_true)), axis=-1)


def my_accuracy(y_true, y_pred):
    return K.mean(2 * K.abs(y_true - 0.5) * K.equal(y_true, K.round(y_pred)), axis=-1)


class WeightClip(Constraint):
    '''Clips the weights incident to each hidden unit to be inside a range
    '''

    def __init__(self, c=2.0):
        self.c = c

    def __call__(self, p):
        return K.clip(p, -self.c, self.c)

    def get_config(self):
        return {'name': self.__class__.__name__,
                'c': self.c}

def build_model():
    reg = 0.000001
    constraint = WeightClip(0.499)
    main_input = Input(shape=(None, 42), name='main_input')
    tmp = Dense(24, activation='tanh', name='input_dense', kernel_constraint=constraint, bias_constraint=constraint)(
        main_input)
    vad_gru = GRU(24, activation='tanh', recurrent_activation='sigmoid', return_sequences=True, name='vad_gru',
                  kernel_regularizer=regularizers.l2(reg), recurrent_regularizer=regularizers.l2(reg),
                  kernel_constraint=constraint, recurrent_constraint=constraint, bias_constraint=constraint)(tmp)
    vad_output = Dense(1, activation='sigmoid', name='vad_output', kernel_constraint=constraint,
                       bias_constraint=constraint)(vad_gru)
    noise_input = concatenate([tmp, vad_gru, main_input])
    noise_gru = GRU(48, activation='relu', recurrent_activation='sigmoid', return_sequences=True, name='noise_gru',
                    kernel_regularizer=regularizers.l2(reg), recurrent_regularizer=regularizers.l2(reg),
                    kernel_constraint=constraint, recurrent_constraint=constraint, bias_constraint=constraint)(noise_input)
    denoise_input = concatenate([vad_gru, noise_gru, main_input])

    denoise_gru = GRU(96, activation='tanh', recurrent_activation='sigmoid', return_sequences=True, name='denoise_gru',
                      kernel_regularizer=regularizers.l2(reg), recurrent_regularizer=regularizers.l2(reg),
                      kernel_constraint=constraint, recurrent_constraint=constraint, bias_constraint=constraint)(
        denoise_input)

    denoise_output = Dense(22, activation='sigmoid', name='denoise_output', kernel_constraint=constraint,
                           bias_constraint=constraint)(denoise_gru)

    model = Model(inputs=main_input, outputs=[denoise_output, vad_output])

    model.compile(loss=[mycost, my_crossentropy],
                  metrics=[msse],
                  optimizer='adam', loss_weights=[10, 0.5])
    return model

model = build_model()
dataset = load_dataset('example.tfrecord')

My dataset now has the following shape:

<MapDataset shapes: ((42,), ((22,), (1,))), types: (tf.float32, (tf.float32, tf.float32))>

which I thought was what the Model API expects (spoiler: it isn't).

model.fit(dataset.batch(10))

gives the following error:

ValueError: Error when checking input: expected main_input to have 3 dimensions, but got array with shape (None, 42)

Makes sense: I don't have the window dimension here. At the same time, it looks like the dataset isn't delivering the shapes expected by Model(inputs=main_input, outputs=[denoise_output, vad_output]).
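The mismatch is easy to see by printing both sides, continuing the script above (element_spec assumes TF 2.x):

print(dataset.batch(10).element_spec)
# (TensorSpec(shape=(None, 42), ...), (TensorSpec(shape=(None, 22), ...), TensorSpec(shape=(None, 1), ...)))
print(model.input_shape)
# (None, None, 42): batch, time (window), features; the time axis is missing from the dataset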

How do I modify load_dataset so that it matches what the Model API expects with tf.data?

python tensorflow machine-learning deep-learning tensorflow-datasets
1 Answer

Given that your model has 1 input and 2 outputs, your tf.data.Dataset should have two entries:

  • an input array of shape (window, 42)
  • a tuple of two arrays, of shapes (window, 22) and (window, 1)

Assuming you start from a three-element tuple with shapes (42,), (22,) and (1,), this can be achieved with two batching operations:

dataset = load_dataset('example.tfrecord')

# I assumed window_size = 1 here; it can be anything else.
window_size = 1

# Batch twice: the first call restores the window (time) dimension,
# the second builds the actual training batches:
dataset = dataset.batch(window_size).batch(10)

# Regroup the elements into the (x, (y, vad)) structure fit() expects.
# (If your parse function already returns the nested (X, (y, vad)) tuple,
# as in the question, this map is unnecessary.)
def custom_reshape(x, y, vad):
    return x, (y, vad)

dataset = dataset.map(custom_reshape)

After applying the above and calling model.fit, I managed to run a training step for your model. In short: you can call model.fit(dataset.batch(window_size).batch(10).map(custom_reshape)) and it should work as well.
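Two small additions of my own on top of this answer, under the same flat-tuple assumption: drop_remainder=True avoids a shorter final window when the record count isn't a multiple of window_size, and prefetch gives the question's "on demand" loading some pipelining:

dataset = (load_dataset('example.tfrecord')
           .batch(window_size, drop_remainder=True)   # fixed-length windows only
           .batch(10)                                 # actual training batches
           .map(custom_reshape)
           .prefetch(tf.data.experimental.AUTOTUNE))  # overlap I/O with training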

Good luck.
