I'm trying to use deep learning with an LSTM in Keras. I use a number of signals as input (nb_sig), which may change during training, while the number of samples is fixed (nb_sample).
I want to do parameter identification, so my output layer has the size of my number of parameters (nb_param).
So I created a training set of size (nb_sig x nb_sample) and labels of size (nb_param x nb_sample).
My problem is that I can't find the right dimensions for the deep learning model. I tried this:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM
nb_sample = 500
nb_sig = 100 # number that may change during the training
nb_param = 10
train = np.random.rand(nb_sig,nb_sample)
label = np.random.rand(nb_param,nb_sample)
DLmodel = Sequential()
DLmodel.add(LSTM(units=100, return_sequences=True, input_shape=(None, nb_sample), activation='tanh'))
DLmodel.add(Dense(nb_param, activation="linear", kernel_initializer="uniform"))
DLmodel.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy', 'mse'], run_eagerly=True)
print(DLmodel.summary())
DLmodel.fit(train, label, epochs=10, batch_size=16)
But I get this error message:
Traceback (most recent call last):
File "C:\Users\maxime\Desktop\SESAME\PycharmProjects\LargeScale_2022_09_07\di3.py", line 31, in <module>
DLmodel.fit(train, label, epochs=10, batch_size=16)
File "C:\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Python310\lib\site-packages\keras\engine\data_adapter.py", line 1851, in _check_data_cardinality
raise ValueError(msg)
ValueError: Data cardinality is ambiguous:
x sizes: 100
y sizes: 10
Make sure all arrays contain the same number of samples.
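For context, Keras's cardinality check only compares the length of the first axis of x and y, which it treats as the sample axis. A minimal NumPy illustration of the mismatch above (same shapes as my arrays, random data):

```python
import numpy as np

x = np.random.rand(100, 500)  # first axis is 100, so Keras sees 100 "samples"
y = np.random.rand(10, 500)   # first axis is 10 -> "x sizes: 100, y sizes: 10"

# Keras compares len(x) and len(y) along axis 0:
print(len(x), len(y))  # → 100 10

# Transposing puts the 500 shared samples on the first axis of both arrays:
assert len(x.T) == len(y.T) == 500
```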
I don't understand what I should pass as input_shape for the LSTM layer, and since the number of signals I use changes during training, this is not very clear to me.
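One way to make the shapes consistent, sketched below under the assumption that each of the nb_sample columns is one training example and each of its nb_sig signal values is one timestep of a 1-dimensional sequence (that interpretation may or may not match the intended physics): transpose both arrays so the shared sample count sits on the first axis, and use input_shape=(None, 1) so the sequence length (nb_sig) can vary between runs. return_sequences=True is also dropped so the Dense layer receives one vector per sample.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

nb_sample = 500
nb_sig = 100   # may change between training runs
nb_param = 10

# Transpose so axis 0 is the sample axis, then add a trailing feature axis:
# train: (nb_sample, nb_sig, 1), label: (nb_sample, nb_param)
train = np.random.rand(nb_sig, nb_sample).T[..., np.newaxis]
label = np.random.rand(nb_param, nb_sample).T

model = Sequential()
# input_shape=(None, 1): variable-length sequences of scalar signal values,
# so a different nb_sig does not require rebuilding the model.
model.add(LSTM(units=100, input_shape=(None, 1), activation='tanh'))
model.add(Dense(nb_param, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='RMSprop')
model.fit(train, label, epochs=2, batch_size=16, verbose=0)
```

With this layout both arrays report 500 samples, so the cardinality check passes and the model maps each sequence to nb_param outputs.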