How to use SVD inside a Keras layer?


My aim is to use SVD to PCA-whiten the latent layer before passing it to the decoder module of an autoencoder. I used tf.linalg.svd, but on its own it does not work, because its output does not carry the Keras metadata needed to connect it into a model. As a workaround I tried wrapping it inside a Lambda, but got this error message:

AttributeError: 'tuple' object has no attribute 'shape'
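For context, tf.linalg.svd(..., compute_uv=True) returns a tuple of three tensors (s, u, v), and the Lambda layer then tries to read a shape off that tuple when inferring its output shape, which is presumably what triggers the error above. A standalone sketch of the return type, outside any model:

import tensorflow as tf

m = tf.eye(3)
result = tf.linalg.svd(m, compute_uv=True)
print(type(result))                   # a plain tuple (s, u, v), not a tensor
s, u, v = result
print(s.shape, u.shape, v.shape)      # (3,), (3, 3), (3, 3)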

I searched SO (e.g. "Use SVD in a custom layer in Keras/tensorflow") and googled for SVD in Keras, but could not find any answers. I am attaching a stripped-down but working piece of code here.

import numpy as np
import tensorflow as tf
from sklearn import preprocessing
from keras.layers import Lambda, Input, Dense, Multiply, Subtract
from keras.models import Model
from keras import backend as K
from keras.losses import mse
from keras import optimizers
from keras.callbacks import EarlyStopping

x = np.random.randn(100, 5)
train_data = preprocessing.scale(x)

input_shape = (5, )
original_dim = train_data.shape[1]
intermediate_dim_1 = 64
intermediate_dim_2 = 16
latent_dim = 2
batch_size = 10
epochs = 15


# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
layer_1 = Dense(intermediate_dim_1, activation='tanh') (inputs)
layer_2 = Dense(intermediate_dim_2, activation='tanh') (layer_1)
encoded_layer = Dense(latent_dim, name='latent_layer') (layer_2)
encoder = Model(inputs, encoded_layer, name='encoder')
encoder.summary()


# build decoder model
latent_inputs = Input(shape=(latent_dim,))
layer_1 = Dense(intermediate_dim_1, activation='tanh') (latent_inputs)
layer_2 = Dense(intermediate_dim_2, activation='tanh') (layer_1)
outputs = Dense(original_dim,activation='sigmoid') (layer_2)
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

# mean removal and pca whitening
meanX = Lambda(lambda x: tf.reduce_mean(x, axis=0, keepdims=True))(encoded_layer)
standardized = Subtract()([encoded_layer, meanX])

sigma2 = K.dot(K.transpose(standardized), standardized)
sigma2 = Lambda(lambda x: x / batch_size)(sigma2)


s, u ,v = tf.linalg.svd(sigma2,compute_uv=True)
# s ,u ,v = Lambda(lambda x: tf.linalg.svd(x,compute_uv=True))(sigma2)
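# NOTE: the bare tf.linalg.svd call above creates plain TF ops whose outputs lack
# Keras layer metadata, while the commented-out Lambda version returns a tuple
# (s, u, v), whose missing .shape attribute triggers the AttributeError.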

epsilon = 1e-6
# sqrt of number close to 0 leads to problem hence replace it with epsilon
si = tf.where(tf.less(s, epsilon), tf.sqrt(1 / epsilon) * tf.ones_like(s),
              tf.math.truediv(1.0, tf.sqrt(s)))
whitening_layer = u @ tf.linalg.diag(si) @ tf.transpose(v)
whitened_encoding = K.dot(standardized, whitening_layer)

# Connect models
z_decoded = decoder(standardized)
# z_decoded = decoder(whitened_encoding)

# Define losses
reconstruction_loss = mse(inputs,z_decoded)

# Instantiate autoencoder
ae = Model(inputs, z_decoded, name='autoencoder')
ae.add_loss(reconstruction_loss)

# callback = EarlyStopping(monitor='val_loss', patience=5)
adam = optimizers.adam(learning_rate=0.002)
ae.compile(optimizer=adam)
ae.summary()
ae.fit(train_data, epochs=epochs, batch_size=batch_size,
       validation_split=0.2, shuffle=True)

To reproduce the error, uncomment the following lines and comment out their counterparts above:

  1. z_decoded = decoder(whitened_encoding)

  2. s ,u ,v = Lambda(lambda x: tf.linalg.svd(x,compute_uv=True))(sigma2)

If anyone could show me how to wrap the SVD inside a Keras layer, or suggest an alternative implementation, I would be grateful. Note that I have not included the reparameterization trick for computing the loss, to keep the code simple. Thanks!
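For reference, the whitening matrix assembled in the code above is W = U · diag(1/sqrt(s)) · V^T, taken from the SVD of the latent covariance, so that the whitened codes end up with roughly identity covariance. A minimal NumPy sketch of the same transform, independent of Keras:

import numpy as np

z = np.random.randn(100, 2)            # stand-in for a batch of latent codes
z = z - z.mean(axis=0, keepdims=True)  # mean removal
cov = z.T @ z / z.shape[0]             # latent covariance, shape (2, 2)

u, s, vt = np.linalg.svd(cov)          # NumPy returns v already transposed
w = u @ np.diag(1.0 / np.sqrt(s)) @ vt
z_white = z @ w

print(np.round(z_white.T @ z_white / z.shape[0], 3))  # approximately identity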

python numpy tensorflow keras svd
1 Answer

0 votes

I solved the problem. To use SVD inside Keras, we need to use the Lambda layer. However, since Lambda returns a tensor carrying some additional (Keras-specific) attributes, it is best to do the extra work inside the lambda function itself and return a single tensor. Another problem in my code was the way the encoder and decoder models were connected; I solved it by feeding the encoder's output into the decoder model's input. The working code is below.
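The pattern, distilled into a minimal sketch (the full version below additionally guards against near-zero singular values with an epsilon):

import tensorflow as tf
from keras.layers import Lambda

def whiten_matrix(cov):
    # do all the tuple-returning work inside the lambda...
    s, u, v = tf.linalg.svd(cov, compute_uv=True)
    # ...and return exactly one tensor
    return u @ tf.linalg.diag(1.0 / tf.sqrt(s)) @ tf.transpose(v)

# whitening = Lambda(whiten_matrix)(sigma2)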

import numpy as np
import tensorflow as tf
from sklearn import preprocessing
from keras.layers import Lambda, Input, Dense, Multiply, Subtract
from keras.models import Model
from keras import backend as K
from keras.losses import mse
from keras import optimizers
from keras.callbacks import EarlyStopping

def SVD(sigma2):
    s ,u ,v = tf.linalg.svd(sigma2,compute_uv=True)
    epsilon = 1e-6
    # sqrt of number close to 0 leads to problem hence replace it with epsilon
    si = tf.where(tf.less(s, epsilon), 
                  tf.sqrt(1 / epsilon) * tf.ones_like(s), 
                  tf.math.truediv(1.0, tf.sqrt(s)))
    whitening_layer = u @ tf.linalg.diag(si) @ tf.transpose(v)
    return whitening_layer

x = np.random.randn(100, 5)
train_data = preprocessing.scale(x)

input_shape = (5, )
original_dim = train_data.shape[1]
intermediate_dim_1 = 64
intermediate_dim_2 = 16
latent_dim = 2
batch_size = 10
epochs = 15


# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
layer_1 = Dense(intermediate_dim_1, activation='tanh') (inputs)
layer_2 = Dense(intermediate_dim_2, activation='tanh') (layer_1)
encoded_layer = Dense(latent_dim, name='latent_layer') (layer_2)
encoder = Model(inputs, encoded_layer, name='encoder')
encoder.summary()


# build decoder model
latent_inputs = Input(shape=(latent_dim,))
layer_1 = Dense(intermediate_dim_1, activation='tanh') (latent_inputs)
layer_2 = Dense(intermediate_dim_2, activation='tanh') (layer_1)
outputs = Dense(original_dim,activation='sigmoid') (layer_2)
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

# mean removal and pca whitening
meanX = Lambda(lambda x: tf.reduce_mean(x, axis=0, keepdims=True))(encoded_layer)
standardized = Subtract()([encoded_layer, meanX])

sigma2 = K.dot(K.transpose(standardized), standardized)
sigma2 = Lambda(lambda x: x / batch_size)(sigma2)


# s, u ,v = tf.linalg.svd(sigma2,compute_uv=True)
whitening_layer = Lambda(SVD)(sigma2)
'''
s ,u ,v = Lambda(lambda x: tf.linalg.svd(x,compute_uv=True))(sigma2)

epsilon = 1e-6
# sqrt of number close to 0 leads to problem hence replace it with epsilon
si = tf.where(tf.less(s, epsilon), 
              tf.sqrt(1 / epsilon) * tf.ones_like(s), 
              tf.math.truediv(1.0, tf.sqrt(s)))
whitening_layer = u @ tf.linalg.diag(si) @ tf.transpose(v)
'''

print('whitening_layer shape=', np.shape(whitening_layer))
print('standardized shape=', np.shape(standardized))

whitened_encoding = K.dot(standardized, whitening_layer)
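# NOTE: whitened_encoding is computed above but not wired into the decoded path;
# decoder(encoder(inputs)) below reconstructs from the raw (unwhitened) codes.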

# Connect models
# z_decoded = decoder(standardized)
z_decoded = decoder(encoder(inputs))

# Define losses
reconstruction_loss = mse(inputs,z_decoded)

# Instantiate autoencoder
ae = Model(inputs, z_decoded, name='autoencoder')
ae.add_loss(reconstruction_loss)

# callback = EarlyStopping(monitor='val_loss', patience=5)
adam = optimizers.adam(learning_rate=0.002)
ae.compile(optimizer=adam)
ae.summary()
ae.fit(train_data, epochs=epochs, batch_size=batch_size,
       validation_split=0.2, shuffle=True)

The output of running the code is as follows:

Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
encoder_input (InputLayer)   (None, 5)                 0         
_________________________________________________________________
dense_1 (Dense)              (None, 64)                384       
_________________________________________________________________
dense_2 (Dense)              (None, 16)                1040      
_________________________________________________________________
latent_layer (Dense)         (None, 2)                 34        
=================================================================
Total params: 1,458
Trainable params: 1,458
Non-trainable params: 0
_________________________________________________________________
Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 2)                 0         
_________________________________________________________________
dense_3 (Dense)              (None, 64)                192       
_________________________________________________________________
dense_4 (Dense)              (None, 16)                1040      
_________________________________________________________________
dense_5 (Dense)              (None, 5)                 85        
=================================================================
Total params: 1,317
Trainable params: 1,317
Non-trainable params: 0
_________________________________________________________________
whitening_layer shape= (2, 2)
standardized shape= (None, 2)
/home/manish/anaconda3/envs/ica_gpu/lib/python3.7/site-packages/keras/engine/training_utils.py:819: UserWarning: Output decoder missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to decoder.
  'be expecting any data to be passed to {0}.'.format(name))
Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
encoder_input (InputLayer)   (None, 5)                 0         
_________________________________________________________________
encoder (Model)              (None, 2)                 1458      
_________________________________________________________________
decoder (Model)              (None, 5)                 1317      
=================================================================
Total params: 2,775
Trainable params: 2,775
Non-trainable params: 0
_________________________________________________________________
Train on 80 samples, validate on 20 samples
Epoch 1/15
2020-05-16 16:01:55.443061: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
80/80 [==============================] - 0s 3ms/step - loss: 1.1739 - val_loss: 1.2238
Epoch 2/15
80/80 [==============================] - 0s 228us/step - loss: 1.0601 - val_loss: 1.0921
Epoch 3/15
80/80 [==============================] - 0s 261us/step - loss: 0.9772 - val_loss: 1.0291
Epoch 4/15
80/80 [==============================] - 0s 223us/step - loss: 0.9385 - val_loss: 0.9875
Epoch 5/15
80/80 [==============================] - 0s 262us/step - loss: 0.9105 - val_loss: 0.9560
Epoch 6/15
80/80 [==============================] - 0s 240us/step - loss: 0.8873 - val_loss: 0.9335
Epoch 7/15
80/80 [==============================] - 0s 217us/step - loss: 0.8731 - val_loss: 0.9156
Epoch 8/15
80/80 [==============================] - 0s 253us/step - loss: 0.8564 - val_loss: 0.9061
Epoch 9/15
80/80 [==============================] - 0s 273us/step - loss: 0.8445 - val_loss: 0.8993
Epoch 10/15
80/80 [==============================] - 0s 235us/step - loss: 0.8363 - val_loss: 0.8937
Epoch 11/15
80/80 [==============================] - 0s 283us/step - loss: 0.8299 - val_loss: 0.8874
Epoch 12/15
80/80 [==============================] - 0s 254us/step - loss: 0.8227 - val_loss: 0.8832
Epoch 13/15
80/80 [==============================] - 0s 227us/step - loss: 0.8177 - val_loss: 0.8789
Epoch 14/15
80/80 [==============================] - 0s 241us/step - loss: 0.8142 - val_loss: 0.8725
Epoch 15/15
80/80 [==============================] - 0s 212us/step - loss: 0.8089 - val_loss: 0.8679

Hope this helps.
