Why does my LSTM layer keep throwing errors? RNN

Problem description | Votes: 0 | Answers: 1

I have an RNN that should take sentences of length 50 as input and produce output of the same length (for a chatbot). Does anyone know why this error:

ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 5000]

comes up? Here is the code:

def model():
    model = Sequential()
    model.add(Embedding(vocab_size, 100, input_length=l))
    model.add(Flatten())
    model.add(LSTM(100, return_sequences=True))
    model.add(LSTM(100))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(vocab_size, activation='softmax'))
    model.summary()
    return model
model = model()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(padded_x, padded_y, batch_size=128, epochs=100)

Both arrays have shape (5000, 50): 5000 sentences, each 50 words long, already integer-encoded. I first thought it was because I'm flattening... but this is the error I get without the Flatten layer:

ValueError: A target array with shape (5000, 50) was passed for an output of shape (None, 12097) while using as loss `categorical_crossentropy`. This loss expects targets to have the same shape as the output.

## BTW vocab_size is 12097 ##
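For the record, the second traceback can be reproduced with plain shape arithmetic: `Dense(vocab_size)` emits one 12097-way distribution per sentence, while `padded_y` holds 50 integer word ids per sentence. A quick sketch of the mismatch (variable names are illustrative, values taken from the question):

```python
# Shapes from the question: 5000 sentences, 50 words each, vocab of 12097.
n_sentences, seq_len, vocab_size = 5000, 50, 12097

target_shape = (n_sentences, seq_len)      # padded_y: 50 word ids per sentence
output_shape = (n_sentences, vocab_size)   # Dense(vocab_size): one softmax per sentence

# categorical_crossentropy requires targets and outputs to have the same
# shape, so fit() raises the ValueError before training starts.
print(target_shape == output_shape)  # False: (5000, 50) vs (5000, 12097)
```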

python arrays lstm recurrent-neural-network word-embedding
1 Answer
0 votes

Don't flatten. You expect an output of size 50, so you need 50 neurons in the last dense layer.

def model():
    model = Sequential()
    model.add(Embedding(vocab_size, 100, input_length=l))
    # no Flatten() here: the LSTM needs the 3-D (batch, timesteps, features)
    # tensor that Embedding produces
    model.add(LSTM(100, return_sequences=True))
    model.add(LSTM(100))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(50, activation='softmax'))
    model.summary()
    return model
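The original ndim error also falls out of shape bookkeeping: Embedding emits a 3-D (batch, timesteps, features) tensor, and Flatten collapses it to 2-D, which an LSTM rejects. A minimal trace, no TensorFlow needed, with the numbers from the question:

```python
# Embedding(vocab_size, 100, input_length=50) outputs (batch, 50, 100).
seq_len, embed_dim = 50, 100

embedding_out = ("batch", seq_len, embed_dim)  # 3-D: what LSTM expects
flatten_out = ("batch", seq_len * embed_dim)   # 2-D: (None, 5000)

# 50 * 100 = 5000 -- exactly the "[None, 5000]" in the traceback.
# LSTM requires ndim=3, so the Flatten layer has to go.
print(len(flatten_out), flatten_out[1])  # 2 5000
```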