How do I train an autoencoder in a supervised way?


I have two datasets: a digits dataset and the semantic information for that digits dataset. I want to train an autoencoder whose latent embedding is forced to match the semantic dataset, i.e. ae_model = Model(input = X_tr, target = [X_tr, S_tr]), where S_tr is the semantic embedding that the encoder output (the latent embedding) should match.

# Load the data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Load the target embeddings
target_embeddings = tf.keras.datasets.mnist.load_data()[1]

# Define the autoencoder
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
])

decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(28 * 28, activation='sigmoid'),
    tf.keras.layers.Reshape((28, 28)),
])

ae_model = tf.keras.Model(encoder, decoder)

# Compile the autoencoder
ae_model.compile(optimizer='adam', loss='mse')

# Train the autoencoder
ae_model.fit(x_train, target_embeddings, epochs=10)

I tried this, but it passes target_embeddings as the training target for the model output. What I want is for the latent embedding to match target_embeddings. How can I do that?

python machine-learning autoencoder supervised-learning
1 Answer
0 votes

Try this:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale to [0, 1] to match the sigmoid output

# target_embeddings must be your semantic embeddings, one 64-dim vector per
# training sample, i.e. shape (num_samples, 64) to match the encoder output.
# Reloading MNIST here only returns the test split, so use your real S_tr:
target_embeddings = S_tr

# encoder block you provided; naming the submodel lets us address its
# output by name in the loss dictionaries below
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu')
], name='latent_embedding')

# decoder block you provided
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(28 * 28, activation='sigmoid'),
    tf.keras.layers.Reshape((28, 28))
], name='reconstructed_output')

input_layer = tf.keras.layers.Input(shape=(28, 28))
latent_embedding = encoder(input_layer)
reconstructed_output = decoder(latent_embedding)

# A two-output model: Keras trains the decoder output against x_train and
# the encoder output against target_embeddings at the same time
ae_model = tf.keras.Model(inputs=input_layer,
                          outputs=[reconstructed_output, latent_embedding])

# We need to define a loss for both the encoder and decoder outputs.
# TODO: you can change these weights to get the best output
loss_weights = {'reconstructed_output': 1.0, 'latent_embedding': 0.1}

ae_model.compile(optimizer='adam',
                 loss={'reconstructed_output': 'mse',
                       'latent_embedding': 'mse'},
                 loss_weights=loss_weights)

ae_model.fit(x_train,
             {'reconstructed_output': x_train,
              'latent_embedding': target_embeddings},
             epochs=10)
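For intuition, the combined objective Keras minimizes here is just the weighted sum of the two per-output losses. A pure-Python sketch of that arithmetic, with made-up numbers standing in for one batch's values:

```python
def mse(y_true, y_pred):
    # mean squared error over a flat list of values
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical per-batch values, for illustration only
recon_loss = mse([0.0, 1.0, 0.5], [0.1, 0.9, 0.5])  # reconstruction vs. x_train
latent_loss = mse([1.0, 2.0], [1.5, 2.5])           # latent vs. target_embeddings

# Keras combines them using loss_weights: 1.0 * recon + 0.1 * latent
total_loss = 1.0 * recon_loss + 0.1 * latent_loss
```

Raising the 0.1 weight pushes the latent space harder toward the semantic embeddings at the cost of reconstruction quality, and vice versa.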