I am trying to create a hybrid CNN and LSTM model. I am running into a problem related to the architecture's shapes, which prevents training from completing its 200 epochs.
My data size is (96, 2).
Error:
Epoch 1/200
178/Unknown 9s 34ms/step - loss: 1.2366 - mse: 5.4560
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Cell In[40], line 4
2 is_train = True
3 if is_train:
----> 4 model_create.fit(train_dataset, epochs=200 , batch_size=128)
Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [78,2], [batch]: [96,2]
[[{{node IteratorGetNext}}]] [Op:__inference_one_step_on_iterator_23678]
CNN-LSTM model:
def create_model_architecture():
    model_cnn = tf.keras.models.Sequential([
        tf.keras.layers.Conv1D(filters=64,
                               kernel_size=3,
                               activation='relu',
                               input_shape=input_data_shape),
        tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="same"),
        tf.keras.layers.Conv1D(filters=64,
                               kernel_size=3,
                               activation='relu'),
        tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="same"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.LSTM(32, return_sequences=True),
        tf.keras.layers.LSTM(16),
        tf.keras.layers.Reshape((-1, 16)),
        #tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    return model_cnn
Compiling the model:
def create_model():
    tf.random.set_seed(51)
    model_create = create_model_architecture()
    #model_create = create_LSTM_model()
    model_create.compile(loss=tf.keras.losses.Huber(),
                         optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                         metrics=["mse"])
    return model_create

model_create = create_model()
model_create.summary()
model_create.fit(train_dataset, epochs=200, batch_size=128)
I have tried adding a Reshape before the Flatten() to change the shape. I have also reduced the batch size and the number of epochs. None of this worked. How can I fit my model to train_dataset?
Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [78,2], [batch]: [96,2]
The cause of this error is not your model architecture but your data pipeline: the elements of your input dataset do not all have exactly the same shape, which is required if you want to batch the dataset (as happens when you call
.fit(train_dataset, epochs=200, batch_size=128)
or explicitly call train_dataset.batch(N)).
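You can confirm this by scanning the unbatched dataset for elements whose shape deviates. The snippet below is a minimal sketch: `train_dataset` here is a toy stand-in (not your actual pipeline) with one short element, mimicking the [78, 2] tensor from the traceback, and it assumes your elements are float tensors of shape (timesteps, 2).

```python
import tensorflow as tf

# Toy stand-in for train_dataset before batching: two full windows of
# shape (96, 2) and one short final window of shape (78, 2).
elems = [tf.ones((96, 2)), tf.ones((96, 2)), tf.ones((78, 2))]
train_dataset = tf.data.Dataset.from_generator(
    lambda: iter(elems),
    output_signature=tf.TensorSpec(shape=(None, 2), dtype=tf.float32))

# Collect every distinct element shape; more than one entry means
# .batch() will fail with "Cannot add tensor to the batch".
shapes = {tuple(x.shape.as_list()) for x in train_dataset}
print(shapes)  # e.g. {(96, 2), (78, 2)}
```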
Possible solutions
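One straightforward fix, sketched below under the assumption that the short [78, 2] element is a leftover partial window you can afford to discard, is to filter out every element that does not have exactly 96 timesteps before batching. (The dataset here is again a toy stand-in for your pipeline.)

```python
import tensorflow as tf

# Toy stand-in: the last element is a short partial window of shape (78, 2).
elems = [tf.ones((96, 2)), tf.ones((96, 2)), tf.ones((78, 2))]
train_dataset = tf.data.Dataset.from_generator(
    lambda: iter(elems),
    output_signature=tf.TensorSpec(shape=(None, 2), dtype=tf.float32))

# Keep only elements with exactly 96 timesteps; batching then succeeds.
uniform = train_dataset.filter(lambda x: tf.shape(x)[0] == 96)
batched = uniform.batch(128)
for batch in batched:
    print(batch.shape)  # (2, 96, 2)
```

If you built the windows yourself with `Dataset.window(...)`, passing `drop_remainder=True` there achieves the same thing at the source. If discarding data is not acceptable, `train_dataset.padded_batch(128)` instead pads the short element up to the longest length within each batch.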