Keras LSTM model - expected 3 dimensions but got array with 2 dimensions

Question (0 votes, 3 answers)

I'm currently developing a financial time-series LSTM model with Keras and have run into this problem.

My code seems to be producing an array with 2 dimensions where 3 are expected. Here is the code:

import pandas as pd
import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Input, Dense, GRU, Embedding, LSTM, Flatten
from tensorflow.python.keras.optimizers import RMSprop
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau


batch_size = 500
feature_no = 13
period_no = 8640
def gen(batch_size, periods):
    j = 0

    features = ['ask_close',
                'ask_open',
                'ask_high',
                'ask_low',
                'bid_close',
                'bid_open',
                'bid_high',
                'bid_low',
                'open',
                'high',
                'low',
                'close',
                'price']
    with pd.HDFStore('datasets/eurusd.h5') as store:
        df = store['train_buy']

    x_shape = (batch_size, periods, len(features))
    x_batch = np.zeros(shape=x_shape, dtype=np.float16)

    y_shape = (batch_size, periods)
    y_batch = np.zeros(shape=y_shape, dtype=np.float16)

    while True:
        i = 0
        while len(x_batch) < batch_size:
            if df.iloc[j+periods]['direction'].values == 1:
                x_batch[i] = df.iloc[j:j+periods][features].values.tolist()
                y_batch[i] = df.iloc[j+periods]['target_buy'][0].round(4)
                i += 1
            j += 1
            if j == 56241737 - periods:
                j = 0
        yield x_batch, y_batch

generator = gen(batch_size, period_no)

model = Sequential()
model.add(LSTM(units = 1, return_sequences=True, input_shape = (None, feature_no,)))


optimizer = RMSprop(lr=1e-3)
model.compile(loss = 'mse', optimizer = optimizer)
model.fit_generator(generator=generator, epochs = 10, steps_per_epoch = 112483)

Here is the error:

Traceback (most recent call last):
  model.fit_generator(generator=generator, epochs = 10, steps_per_epoch = 112483)
  File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\models.py", line 1198, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 2345, in fit_generator
    x, y, sample_weight=sample_weight, class_weight=class_weight)
  File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 1981, in train_on_batch
    check_batch_axis=True)
  File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 1514, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 139, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected lstm_1 to have 3 dimensions, but got array with shape (500, 8640)

I've seen similar issues on GitHub and they seem to have resolved them, but the solutions there don't seem to apply to this problem.

python tensorflow keras
3 Answers
1 vote

LSTM (and GRU) layers require 3-dimensional input: batch size, number of time steps, and number of features.

In input_shape terms these are specified as (batch size, time steps, no. of features). So just looking at your code, you should change

model.add(LSTM(units = 1, return_sequences=True, input_shape = (None, feature_no,)))

to

model.add(LSTM(units = 1, return_sequences=True, input_shape = (batch_size, periods, len(features))))

EDIT: my mistake, input_shape is not specified as a 3-dimensional array, but a 3-dimensional array is what the model expects as input.

I believe the error here is actually caused by the output shape. With return_sequences = True, the LSTM's output has shape (batch_size, timesteps, units), so the generator should produce y_batch arrays of shape (batch_size, periods, 1).
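
To make that shape contract concrete, here is a minimal sketch with toy sizes (the array names and numbers are made up for the example; the import paths simply mirror the ones in the question):

import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import LSTM

batch_size, timesteps, n_features = 4, 10, 13   # toy sizes for illustration

# LSTM input must be 3-D: (batch, timesteps, features)
x = np.zeros((batch_size, timesteps, n_features), dtype=np.float32)

model = Sequential()
# input_shape excludes the batch dimension: (timesteps, features)
model.add(LSTM(units=1, return_sequences=True, input_shape=(timesteps, n_features)))
model.compile(loss='mse', optimizer='rmsprop')

# With return_sequences=True the output, and therefore the target,
# is also 3-D: (batch, timesteps, units) = (4, 10, 1)
y = np.zeros((batch_size, timesteps, 1), dtype=np.float32)
print(model.predict(x).shape)   # (4, 10, 1)
model.train_on_batch(x, y)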

0 votes

Found the solution:

1. As Platinum95 said, the return_sequences option on an LSTM layer should only be used when its output is being passed to another LSTM layer.
2. In addition, the shape of y_batch in the generator was wrong; it should have shape (batch_size,). A sketch of the combined fix follows below.
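
A rough sketch of that combination under assumed layer sizes (the 32-unit layers and the Dense head are illustrative, not from the question): the final LSTM drops return_sequences, so each input sequence maps to a single target value.

import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import LSTM, Dense

batch_size, timesteps, n_features = 4, 10, 13   # toy sizes for illustration

model = Sequential()
# return_sequences=True only where the output feeds another recurrent layer
model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, n_features)))
model.add(LSTM(32))        # last LSTM returns only the final step: (batch, 32)
model.add(Dense(1))        # one prediction per sequence: (batch, 1)
model.compile(loss='mse', optimizer='rmsprop')

x = np.zeros((batch_size, timesteps, n_features), dtype=np.float32)
# one scalar target per sample, i.e. the (batch_size,) shape from the answer,
# written with an explicit trailing axis to match the Dense(1) output
y = np.zeros((batch_size, 1), dtype=np.float32)
model.train_on_batch(x, y)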


0 votes

This comes down to the nature of the model: sequence models such as RNN, LSTM and GRU require 3-D input, unlike non-sequence models such as LR, RF and SVM. An illustrative sketch follows below.
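
As a sketch of that difference (the array names and window length are assumptions for the example), flat 2-D samples-by-features data has to be windowed into the 3-D (samples, timesteps, features) layout before a recurrent layer can use it:

import numpy as np

n_rows, n_features, timesteps = 100, 13, 10   # assumed sizes

# Flat 2-D data, as an LR / RF / SVM model would consume it
flat = np.random.rand(n_rows, n_features).astype(np.float32)

# Slide a window over the rows to build the 3-D input an LSTM/GRU expects
windows = np.stack([flat[i:i + timesteps] for i in range(n_rows - timesteps)])

print(flat.shape)      # (100, 13)      -> fine for non-sequence models
print(windows.shape)   # (90, 10, 13)   -> (samples, timesteps, features)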
