TensorFlow dynamic RNN - shapes

Question

Hello dear programmers!

I have multiple frames of a video, and I want my RNN to have as many layers (time steps) as I have frames, so that I can feed one frame into each of them.

Notes: frame shape = (224, 224, 3), but I flatten it; frames per video = 20 = number of inner layers (time steps).
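
To make the shapes concrete, here is a rough NumPy sketch of what such a flattened batch looks like (the array and the batch size are made up, just for illustration):

import numpy as np

num_videos = 4                                    # hypothetical batch size
timesteps = 20                                    # frames per video
frame_shape = (224, 224, 3)

# Hypothetical raw batch: (videos, frames, height, width, channels)
frames = np.random.rand(num_videos, timesteps, *frame_shape).astype(np.float32)

# Flatten each frame so the batch matches the placeholder shape (None, 20, 224*224*3)
x_batch = frames.reshape(num_videos, timesteps, -1)
print(x_batch.shape)                              # (4, 20, 150528)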

This is what I currently have:

import tensorflow as tf

timesteps = 20
inner_layer_size = 100
output_layer_size = 2

sdev = 0.1

inputs = 224 * 224 * 3

x = tf.placeholder(tf.float32, shape=(None, timesteps, inputs), name="x")
y = tf.placeholder(tf.int32, shape=(None,), name="y")

# Compute the layers
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=inner_layer_size)
outputs, state = tf.nn.dynamic_rnn(cell=lstm_cell, dtype=tf.float32, inputs=x)

Wz = tf.get_variable(name="Wz", shape=(inner_layer_size, output_layer_size),
                         initializer=tf.truncated_normal_initializer(stddev=sdev))
bz = tf.get_variable(name="bz", shape=(1, output_layer_size),
                         initializer=tf.constant_initializer(0.0))

logits = tf.matmul(state, Wz) + bz
prediction = tf.nn.softmax(logits)

I know this is not the way I want it to be. If you look at the first picture here, it is clear that the input to each layer should be one part of the sequence (a single frame), not the whole sequence.

My question now is: how do I change this, and how do I have to adjust my 'W' and 'b'? Thanks for taking the time :)

tensorflow rnn
1 Answer

The problem is that you are passing the LSTM's state to the dense layer instead of its outputs.
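
To make the difference concrete, here is a small self-contained sketch (TensorFlow 1.x, using the sizes from your question; illustrative only):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 20, 224 * 224 * 3))
cell = tf.contrib.rnn.LSTMCell(num_units=100)
outputs, state = tf.nn.dynamic_rnn(cell=cell, dtype=tf.float32, inputs=x)

print(outputs.shape)  # (?, 20, 100) -> one 100-dim vector per time step
print(state.h.shape)  # (?, 100)     -> hidden state of the last time step only
print(state.c.shape)  # (?, 100)     -> cell state of the last time step only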

In your case the outputs will have shape [None, 20, 100]. You need to split them along the time_steps axis and then pass each slice through the dense layer. This can be done with the following code:

# LSTM output
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=inner_layer_size)
outputs, state = tf.nn.dynamic_rnn(cell=lstm_cell, dtype=tf.float32, inputs=x)

# Split the outputs along the time axis into `timesteps` slices,
# each of shape [None, 1, inner_layer_size].
lstm_sequence = tf.split(outputs, timesteps, axis=1)

# Dense layer applied at each time step; the variable scope lets the weights be shared.
def dense(inputs, reuse=False):
    with tf.variable_scope('MLP', reuse=reuse):
        Wz = tf.get_variable(name="Wz", shape=(inner_layer_size, output_layer_size),
                             initializer=tf.truncated_normal_initializer(stddev=sdev))
        bz = tf.get_variable(name="bz", shape=(1, output_layer_size),
                             initializer=tf.constant_initializer(0.0))

        logits = tf.matmul(inputs, Wz) + bz
        prediction = tf.nn.softmax(logits)
        return prediction

# Pass the output of each time step through the dense layer.
# reuse=True after the first call keeps the weights shared across steps.
out = []
for i, frame in enumerate(lstm_sequence):
    if i == 0:
        out.append(dense(tf.reshape(frame, [-1, inner_layer_size])))
    else:
        out.append(dense(tf.reshape(frame, [-1, inner_layer_size]), reuse=True))
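
As a side note (not part of the original answer): in TF 1.x, tf.layers.dense applied to a 3-D tensor acts on the last axis with a single shared weight matrix, so the same shared per-step projection can be written without the explicit split and loop, and it yields the stacked tensor directly instead of a Python list:

# Alternative sketch: one shared dense projection over all time steps at once.
# `outputs` is the [None, timesteps, inner_layer_size] tensor from dynamic_rnn above.
logits = tf.layers.dense(outputs, output_layer_size)  # [None, timesteps, output_layer_size]
prediction = tf.nn.softmax(logits)                    # softmax over the last axis, per frame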