Error when predicting with a trained RNN: "kernel already exists"


Thank you for taking a look at this question!

I am trying to train an LSTM network that predicts the next 5 days of stock prices from the prices of the past 30 days. I trained the model on 265 samples. The variables are defined as follows:

# Variables
x = tf.placeholder("float", [265, 30])   # 265 training samples, 30 past days each
y = tf.placeholder("float", [265, 5])    # 5-day price targets

weights = {
    'out': tf.Variable(tf.random_normal([n_hidden, y_size]))
    }

biases = {
    'out': tf.Variable(tf.random_normal([y_size]))
    }

The model looks like this:

# Define RNN architecture
def RNN(x, weights, biases):
    x_size = 30
    x = tf.reshape(x, [-1, x_size])   # (batch, 30)
    x = tf.split(x, x_size, 1)        # list of 30 tensors of shape (batch, 1), one per day

    rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden), rnn.BasicLSTMCell(n_hidden)])

    outputs, states = rnn.static_rnn(rnn_cell, x, dtype = tf.float32)

    return tf.matmul(outputs[-1], weights['out'] + biases['out'])

Then I tried to use the trained model to make a prediction as follows:

y_pred = RNN(x_input, trained_weights, trained_biases)

where x_input has dimensions (1 x 30). This gives me a list of errors that I can't make sense of:

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:

  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)

Traceback (most recent call last):
  File "C:\Users\teh.khoonkheng\Desktop\Others\Personal working folder\14. Projects\1. Oracle\Python\RNN_stock_01.py", line 135, in <module>
    y_test = RNN(x_test, trained_weights, trained_biases)
  File "C:\Users\teh.khoonkheng\Desktop\Others\Personal working folder\14. Projects\1. Oracle\Python\RNN_stock_01.py", line 81, in RNN
    outputs, states = rnn.static_rnn(rnn_cell, x, dtype = tf.float32)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\rnn.py", line 1330, in static_rnn
    (output, state) = call_cell()
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\rnn.py", line 1317, in <lambda>
    call_cell = lambda: cell(input_, state)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 191, in __call__
    return super(RNNCell, self).__call__(inputs, state)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\layers\base.py", line 714, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1242, in call
    cur_inp, new_state = cell(cur_inp, cur_state)

I am wondering whether I have misunderstood how static_rnn works. Have I set up the model incorrectly? How should I use the trained RNN to make predictions?

Thanks for your help!

python tensorflow lstm prediction rnn
1 Answer
0 votes

As the error says, you need to set reuse so that the learned state can be used again later for prediction. Do this:

rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden, reuse=tf.AUTO_REUSE),
                             rnn.BasicLSTMCell(n_hidden, reuse=tf.AUTO_REUSE)])
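An equivalent alternative (a sketch of my own, not part of the original answer) is to wrap everything inside RNN that creates variables in a single variable scope opened with reuse=tf.AUTO_REUSE, so that a second call made for prediction picks up the existing kernels instead of trying to create them again. The scope name lstm_model below is arbitrary, and n_hidden plus the rnn import are assumed to be the ones from the question:

# Sketch: one AUTO_REUSE scope around variable creation, so a second call
# to RNN() (for prediction) reuses the trained kernels.
def RNN(x, weights, biases):
    with tf.variable_scope("lstm_model", reuse=tf.AUTO_REUSE):
        x = tf.reshape(x, [-1, 30])   # (batch, 30)
        x = tf.split(x, 30, 1)        # sequence of 30 single-feature steps

        rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),
                                     rnn.BasicLSTMCell(n_hidden)])
        outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)

        # Add the bias after the matmul.
        return tf.matmul(outputs[-1], weights['out']) + biases['out']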

Also, this model setup does not look right because, although you are using the trained weights and biases, you are not using the trained LSTM cells: for the new input you are defining new LSTM cells.
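More generally, the usual TensorFlow 1.x pattern is to build the graph once with a batch-size-agnostic placeholder and feed both the training data and the new input through the same ops, so the RNN (and its LSTM cells) is never defined twice. Below is a minimal sketch of that pattern, assuming the RNN, weights and biases from the question; training_steps, train_x, train_y and the loss/optimizer choice are placeholders of mine, not taken from the question:

# Build the graph exactly once; None lets the batch size vary
# (265 rows for training, 1 row for prediction).
x = tf.placeholder("float", [None, 30])
y = tf.placeholder("float", [None, 5])

pred = RNN(x, weights, biases)                    # LSTM cells created once
cost = tf.reduce_mean(tf.square(pred - y))        # example loss
train_op = tf.train.AdamOptimizer().minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(training_steps):
        sess.run(train_op, feed_dict={x: train_x, y: train_y})
    # Prediction reuses the already-trained cells; no second RNN() call needed.
    y_pred = sess.run(pred, feed_dict={x: x_input})   # x_input shaped (1, 30)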
