Low accuracy of an LSTM model for time-series classification


I am trying to create a model for time-series analysis using an LSTM layer, but the accuracy is very low even with Dense layers and no LSTM. The data are time series (synthetic spectra) that depend on 4 parameters. Varying the parameters lets me generate datasets of different sizes in which each sample differs more or less from the others. But regardless of the dataset size, the accuracy is always as low as 0.0 - 0.32 %.

Model with LSTM:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dropout, Dense

print(trainset.shape)
print(testset.shape)
print(trainlabels.shape)

model = Sequential()

model.add(Masking(mask_value=0.0, input_shape=(trainset.shape[1], trainset.shape[2])))

model.add(LSTM(10, activation='relu', stateful=False, return_sequences=False))
model.add(Dropout(0.3))
model.add(Dense(len(trainlabels), activation='relu'))

model.compile(loss='sparse_categorical_crossentropy', 
              optimizer='Adam', metrics=['accuracy'])

print(model.summary())
model.fit(trainset, trainlabels, validation_data=(testset, testlabels), 
          epochs=3, batch_size=10)

scores = model.evaluate(testset, testlabels, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Output:

(2478, 600, 1)
(620, 600, 1)
(2478,)
Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
masking_7 (Masking)          (None, 600, 1)            0         
_________________________________________________________________
lstm_7 (LSTM)                (None, 10)                480       
_________________________________________________________________
dropout_7 (Dropout)          (None, 10)                0         
_________________________________________________________________
dense_7 (Dense)              (None, 2478)              27258     
=================================================================
Total params: 27,738
Trainable params: 27,738
Non-trainable params: 0
_________________________________________________________________
None
Train on 2478 samples, validate on 620 samples
Epoch 1/3
2478/2478 [==============================] - 53s 22ms/step - loss: 8.9022 - accuracy: 4.0355e-04 - val_loss: 7.8152 - val_accuracy: 0.0016
Epoch 2/3
2478/2478 [==============================] - 54s 22ms/step - loss: 7.8152 - accuracy: 4.0355e-04 - val_loss: 7.8152 - val_accuracy: 0.0016
Epoch 3/3
2478/2478 [==============================] - 53s 21ms/step - loss: 7.8152 - accuracy: 4.0355e-04 - val_loss: 7.8152 - val_accuracy: 0.0016
Accuracy: 0.16%

Some values in the training data are 0.0, which is why Masking is used. I have tried different losses, optimizers, activations, Dropout rates, and layer parameters. Even after adding more Dense layers or changing the batch size, the result is always the same.
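For reference, here is a minimal sketch (with a made-up toy batch, not the question's data) of what Keras Masking actually does: a timestep whose features all equal mask_value is flagged as masked, and mask-aware layers such as LSTM skip it.

import tensorflow as tf

# Toy batch (hypothetical): 2 samples, 4 timesteps, 1 feature; zeros mark padding.
x = tf.constant([[[1.0], [2.0], [0.0], [0.0]],
                 [[3.0], [0.0], [4.0], [0.0]]])

masking = tf.keras.layers.Masking(mask_value=0.0)
mask = masking.compute_mask(x)   # True where the timestep is kept
print(mask.numpy())
# [[ True  True False False]
#  [ True False  True False]]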

Dense model:

Here the data format is 2D instead of 3D.
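(The question does not show how the series were flattened; a minimal sketch, assuming a plain NumPy reshape from (samples, timesteps, 1) to (samples, timesteps), is given below.)

import numpy as np

# Hypothetical flattening step for the Dense-only model; not shown in the question.
x3d = np.zeros((2478, 600, 1))        # same shape as trainset in the LSTM experiment
x2d = x3d.reshape(x3d.shape[0], -1)   # -> (2478, 600)
print(x2d.shape)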

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Masking(mask_value=0.0, input_shape=(trainset.shape[1],)),

    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.05),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.05),
    keras.layers.Dense(len(labels), activation='softmax')
])

model.compile(optimizer='adam', 
             loss='sparse_categorical_crossentropy',
             metrics=['accuracy'])

model.fit(np.uint8(trainset), np.uint8(trainlabels), epochs=100)

test_loss, test_acc = model.evaluate(np.uint8(testset), np.uint8(testlabels),
                                     verbose=2)
print(test_acc)

Output:

Train on 1239 samples
Epoch 1/100
1239/1239 [==============================] - 1s 1ms/sample - loss: 8.5421 - accuracy: 0.0033
Epoch 2/100
1239/1239 [==============================] - 0s 371us/sample - loss: 6.2039 - accuracy: 0.0025
Epoch 3/100
1239/1239 [==============================] - 0s 347us/sample - loss: 5.6502 - accuracy: 0.0033
****
Epoch 97/100
1239/1239 [==============================] - 0s 380us/sample - loss: 0.1472 - accuracy: 0.9746
Epoch 98/100
1239/1239 [==============================] - 0s 364us/sample - loss: 0.1562 - accuracy: 0.9680
Epoch 99/100
1239/1239 [==============================] - 1s 408us/sample - loss: 0.1511 - accuracy: 0.9721
Epoch 100/100
1239/1239 [==============================] - 0s 378us/sample - loss: 0.1719 - accuracy: 0.9680
310/1 - 0s - loss: 18.6845 - accuracy: 0.0000e+00
0.0

This model's training loss becomes very low, but the test accuracy is also very low.

What kind of model architecture should I use for my data?

Thanks in advance for helping me learn this!

keras lstm
1 Answer

Try using sigmoid activation, and try without dropout.
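A minimal sketch of that suggestion applied to the question's LSTM model (input shape and output size taken from the model summary above; everything else is left as in the question):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense

n_timesteps, n_features = 600, 1   # input shape from the question
n_outputs = 2478                   # output size used in the question's model

# sigmoid activation in the LSTM and the Dropout layer removed, as suggested above
model = Sequential([
    Masking(mask_value=0.0, input_shape=(n_timesteps, n_features)),
    LSTM(10, activation='sigmoid', return_sequences=False),
    Dense(n_outputs, activation='relu'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='Adam', metrics=['accuracy'])
model.summary()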
