TensorFlow: how to pass the output of the previous time step as input to the next time step

Question · 11 votes · 3 answers

This question is a duplicate of How can I feed last output y(t-1) as input for generating y(t) in tensorflow RNN?

I want to pass the output of an RNN at time step T as the input at time step T+1, i.e. input_RNN(T+1) = output_RNN(T). According to the documentation, the tf.nn.rnn and tf.nn.dynamic_rnn functions explicitly take the complete input for all time steps.

I checked the seq2seq example at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/seq2seq.py. It uses a loop and calls cell(input, state), where the cell can be an LSTM, a GRU, or any other RNN cell. I searched the documentation for the data types and shapes of the arguments to cell(), but I found only a constructor of the form cell(num_neurons). I would like to know the correct way to pass the output back to the input. I don't want to use another library/wrapper such as Keras built on top of TensorFlow. Any suggestions?

python tensorflow recurrent-neural-network
3 Answers
2 votes

One approach is to write your own RNN cell, together with your own MultiRNNCell. That way you can store the last RNN cell's output internally and access it at the next time step. Check out this blogpost for more information. You can also add, for example, an encoder or decoder directly inside the cell, so that you can process the data before feeding it to the cell or after retrieving it from the cell.
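For illustration, here is a minimal sketch of such a wrapper cell (the name FeedbackWrapper and the sizes are my own, not taken from the blog post). It carries the previous output in its state tuple and feeds it back as the cell input at the next step; it assumes the wrapped cell's output size equals its input size:

import tensorflow as tf

class FeedbackWrapper(tf.nn.rnn_cell.RNNCell):
    """Wraps an RNN cell so that it is fed its own previous output as input."""

    def __init__(self, cell):
        super(FeedbackWrapper, self).__init__()
        self._cell = cell  # requires: cell input size == cell output size

    @property
    def state_size(self):
        # carry the inner cell's state plus the last output
        return (self._cell.state_size, self._cell.output_size)

    @property
    def output_size(self):
        return self._cell.output_size

    def call(self, inputs, state):
        inner_state, prev_output = state
        # ignore the external input and feed the previous output instead;
        # at time == 0, prev_output is all zeros, so you may want to mix
        # `inputs` in here for the first step
        output, new_inner_state = self._cell(prev_output, inner_state)
        return output, (new_inner_state, output)

# usage: dynamic_rnn still needs an inputs tensor to define the batch size and
# the number of time steps, but the wrapper ignores its values
num_neurons, batch_size, max_time = 64, 32, 10  # example sizes
cell = FeedbackWrapper(tf.contrib.rnn.GRUCell(num_neurons))
dummy_inputs = tf.zeros([batch_size, max_time, num_neurons])
outputs, state = tf.nn.dynamic_rnn(cell, dummy_inputs, dtype=tf.float32)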

Another possibility is to use the tf.nn.raw_rnn function, which gives you control over what happens before and after the RNN cell is called. The following code snippet shows how to use this function; credit goes to this article.

from tensorflow.python.ops.rnn import _transpose_batch_time
import tensorflow as tf


def sampling_rnn(self, cell, initial_state, input_, seq_lengths):

    # raw_rnn expects time-major inputs as TensorArrays
    max_time = ...  # this is the max time step per batch
    inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time, clear_after_read=False)
    inputs_ta = inputs_ta.unstack(_transpose_batch_time(input_))  # input_ is the input placeholder
    input_dim = input_.get_shape()[-1].value  # the dimensionality of the input at each time step
    output_dim = ...  # the dimensionality of the model's output at each time step

    def loop_fn(time, cell_output, cell_state, loop_state):
        """
        Loop function that allows us to control the input to the rnn cell and manipulate cell outputs.
        :param time: current time step
        :param cell_output: output from previous time step or None if time == 0
        :param cell_state: cell state from previous time step
        :param loop_state: custom loop state to share information between different iterations of this loop fn
        :return: tuple consisting of
          elements_finished: tensor of size [batch_size] which is True for sequences that have reached their end,
            needed because of variable sequence size
          next_input: input to next time step
          next_cell_state: cell state forwarded to next time step
          emit_output: the first return argument of raw_rnn. This is not necessarily the output of the RNN cell,
            but could e.g. be the output of a dense layer attached to the rnn layer.
          next_loop_state: loop state forwarded to the next time step
        """
        if cell_output is None:
            # time == 0, used for initialization before the first call to the cell
            next_cell_state = initial_state
            # the emit_output in this case tells TF what shape future emits will have
            emit_output = tf.zeros([output_dim])
        else:
            # t > 0, called right after the call to the cell, i.e. cell_output is the output from time t-1.
            # Here you can do whatever you want with cell_output before assigning it to emit_output.
            # In this case, we don't do anything.
            next_cell_state = cell_state
            emit_output = cell_output

        # check which elements are finished
        elements_finished = (time >= seq_lengths)
        finished = tf.reduce_all(elements_finished)

        # assemble the cell input for the upcoming time step
        current_output = emit_output if cell_output is not None else None
        input_original = inputs_ta.read(time)  # tensor of shape (None, input_dim)

        if current_output is None:
            # This is the initial step, i.e. there is no output from a previous time step. What we feed here
            # can highly depend on the data. In this case we just assign the actual input in the first time step.
            next_in = input_original
        else:
            # time > 0, so just use the previous output as the next input.
            # Here you could do fancier things, whatever you want to do before passing the data into the rnn cell.
            # If you were to pass input_original here, you would get the normal behaviour of dynamic_rnn.
            next_in = current_output

        next_input = tf.cond(finished,
                             lambda: tf.zeros([self.batch_size, input_dim], dtype=tf.float32),  # copy through zeros
                             lambda: next_in)  # if not finished, feed the previous output as the next input

        # set the shape manually, otherwise it is not defined for the last dimension
        next_input.set_shape([None, input_dim])

        # loop state not used in this example
        next_loop_state = None
        return (elements_finished, next_input, next_cell_state, emit_output, next_loop_state)

    outputs_ta, last_state, _ = tf.nn.raw_rnn(cell, loop_fn)
    outputs = _transpose_batch_time(outputs_ta.stack())
    final_state = last_state

    return outputs, final_state
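To put this into a graph, a hypothetical call site might look like the following sketch (the sizes and the model object are my assumptions; note that num_units must equal input_dim, because each output is fed back as the next input):

batch_size, max_time, input_dim = 32, 50, 64  # example sizes
inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])
seq_lengths = tf.placeholder(tf.int32, [batch_size])
cell = tf.contrib.rnn.LSTMCell(num_units=input_dim)  # output size must match input_dim
initial_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = model.sampling_rnn(cell, initial_state, inputs, seq_lengths)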

As a side note: it is not clear whether relying on the model's own outputs during training is a good idea. Especially at the beginning, the model's outputs can be quite bad, so your training might never converge or might not learn anything meaningful.


0 votes

Define init_state together with the network layers:

init_state = tf.placeholder(tf.float32, [batch_size, hidden])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden)
state_series, current_state = tf.nn.dynamic_rnn(basic_cell, x, dtype=tf.float32, initial_state=init_state)

Then, before your training_steps_loop, initialize a zero state:

_init_state = np.zeros([batch_size, hidden], dtype=np.float32)

Inside training_steps_loop, run the session with _init_state in the feed_dict, and feed the returned _current_state back as the new _init_state for the next step:

_training_op, _state_series, _current_state = sess.run(
    [training_op, state_series, current_state],
    feed_dict={x: xdb, y: ydb, init_state: _init_state})

_init_state = _current_state
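Put together, a minimal sketch of the whole training loop could look like this (num_steps and get_batch are hypothetical stand-ins for your own loop bound and data pipeline; the other names come from the snippets above):

import numpy as np
import tensorflow as tf

_init_state = np.zeros([batch_size, hidden], dtype=np.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(num_steps):
        xdb, ydb = get_batch()  # hypothetical batch loader
        _, _state_series, _current_state = sess.run(
            [training_op, state_series, current_state],
            feed_dict={x: xdb, y: ydb, init_state: _init_state})
        # the state returned by this step seeds the next step
        _init_state = _current_state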

0 votes

I think one trick is to use tf.contrib.seq2seq.InferenceHelper, because this helper can pass the output to the next time step's input, as discussed in this issue and this question. Here is my own code (inspired by this question), which works:

"""
construct Decoder
"""
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))

# a start token should be used in both the training and the inference process
start_tokens = tf.tile(tf.constant([START_ARRAY], dtype=tf.float32), [BATCH_SIZE, 1], name='start_tokens')

# training decoder
with tf.variable_scope("decoder"):
    # below, construct a helper that passes the output to the next time step
    training_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[decoder_hidden_units],
        sample_dtype=tf.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

    # pass the encoder's final state directly as the decoder's initial state
    # (LSTMStateTuple has no clone() method, so zero_state(...).clone(...) would fail here)
    training_decoder = tf.contrib.seq2seq.BasicDecoder(cell, training_helper,
                                                       initial_state=encoder_state)

    training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                                      impute_finished=True,
                                                                      maximum_iterations=max_iters)

The inference version of the decoder is identical to this training decoder, so you can use it for inference directly.
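As a sketch of how to consume the result (targets and the loss below are my own additions, not part of the original answer), the decoder outputs can be read from the rnn_output field of training_decoder_output:

# training_decoder_output.rnn_output has shape [BATCH_SIZE, time, decoder_hidden_units]
predictions = training_decoder_output.rnn_output
loss = tf.reduce_mean(tf.squared_difference(predictions, targets))  # `targets` assumed
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)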
