Autoencoder reshaping problem


There is a problem with my autoencoder: I am shaping the output incorrectly. The current autoencoder code is shown below.

I am getting this error:

ValueError: Dimensions must be equal, but are 2000 and 3750 for '{{node mean_absolute_error/sub}} = Sub[T=DT_FLOAT](sequential_8/sequential_7/conv1d_transpose_14/BiasAdd, IteratorGetNext:1)' with input shapes: [?,2000,3], [?,3750,3].

Could someone help me adjust the architecture, if possible? I seem to have forgotten the original modifications I made to adapt it.

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D, concatenate
from tensorflow.keras.callbacks import EarlyStopping

# Provided encoder
encoder = tf.keras.models.Sequential([
    tf.keras.layers.Reshape([3750, 3], input_shape=[3750, 3]),
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Conv1D(256, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Conv1D(512, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512)
])

#latent space

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    # Adjust the kernel size and padding to match the input shape
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")
])

# Add more layers with larger kernel sizes to both encoder and decoder.
ae = tf.keras.models.Sequential([encoder, decoder])

ae.compile(
    loss="mean_squared_error", 
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001)
)
# Define the early stopping criteria
early_stopping = EarlyStopping(monitor='val_loss', patience=30, mode='min')

history = ae.fit(X_train, X_train, batch_size=8, epochs=150, validation_data=(X_val, X_val), callbacks=[early_stopping])
1 Answer

The error indicates a shape mismatch in the autoencoder, so computing the reconstruction error between the input and the output fails. You can check the output shapes with model.summary(): the decoder ends at (None, 2000, 3), because 125 doubled four times is 2000, while the input and target shape is (None, 3750, 3).
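
As a quick check (a sketch, assuming the encoder and decoder Sequentials from the question are already defined):

encoder.summary()   # last output shape: (None, 512)
decoder.summary()   # last output shape: (None, 2000, 3), since 125 * 2**4 = 2000
# The training targets are X_train with shape (batch, 3750, 3), hence the mismatch.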

A quick fix is to add a padding layer; for example, you could use the following decoder architecture:

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),
    tf.keras.layers.ZeroPadding1D(padding=875),
    # Adjust the kernel size and padding to match the input shape
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")
])
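
As a sanity check (a sketch, assuming the decoder above): ZeroPadding1D(padding=875) pads 875 zeros on each side, so the length becomes 2000 + 2 * 875 = 3750 and the output matches the input length again:

decoder.summary()   # last output shape: (None, 3750, 3)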

Note: it is better to spread this padding across the intermediate layers, so that you do not flood the model with a large block of zeros in a single layer.
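
A minimal sketch of one way to do that, assuming the same 3750 x 3 input. The split 50/50/50/175 is an arbitrary choice; any non-negative values with 8*a + 4*b + 2*c + d == 875 reach length 3750, because padding inserted earlier is multiplied by the remaining upsampling factors:

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),          # 125 -> 250
    tf.keras.layers.ZeroPadding1D(padding=50),     # 250 -> 350
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),          # 350 -> 700
    tf.keras.layers.ZeroPadding1D(padding=50),     # 700 -> 800
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),          # 800 -> 1600
    tf.keras.layers.ZeroPadding1D(padding=50),     # 1600 -> 1700
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),          # 1700 -> 3400
    tf.keras.layers.ZeroPadding1D(padding=175),    # 3400 -> 3750
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")
])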
