Why is val_loss so low, but the test RMSE much higher?


I am training an LSTM on a dataset of 17,568 rows: two months of monitoring values sampled every 5 minutes (61 days × 288 samples per day).
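For context, the question does not show how X_train and Y_train were built. A common approach is a sliding window over the scaled series; the sketch below is only my assumption of that preprocessing (the lookback of 12, i.e. one hour of 5-minute steps, and the 80/20 split are hypothetical):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical windowing: turn a 1-D series into (samples, timesteps, features)
# inputs for the LSTM, predicting the next step from `lookback` past steps.
def make_windows(series, lookback):
    X, Y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # `lookback` past steps as input
        Y.append(series[i + lookback])     # next step as the target
    return np.array(X).reshape(-1, lookback, 1), np.array(Y)

values = np.random.rand(17568, 1)          # stand-in for the 5-minute data
scaler = MinMaxScaler()
scaled = scaler.fit_transform(values).ravel()

split = int(len(scaled) * 0.8)             # chronological split, no shuffling
X_train, Y_train = make_windows(scaled[:split], lookback=12)
X_test, Y_test = make_windows(scaled[split:], lookback=12)
```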

The model is:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential()
# First LSTM returns the full sequence so the second LSTM gets one vector per timestep.
model.add(LSTM(300, activation='relu',
               input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
model.add(Dropout(0.1))
model.add(LSTM(300, activation='relu'))
model.add(Dense(1))  # single-step regression output
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X_train, Y_train, epochs=100, batch_size=70,
                    validation_data=(X_test, Y_test),
                    callbacks=[EarlyStopping(monitor='val_loss', patience=10, verbose=1)],
                    verbose=1, shuffle=False)  # keep time order for a time series
model.summary()
```

The code that computes the RMSE is:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

train_predict = model.predict(X_train)
test_predict = model.predict(X_test)
# Invert the scaling back to the original units. inverse_transform expects a
# 2-D array, hence wrapping Y in a list: [Y_train] has shape (1, n).
train_predict = scaler.inverse_transform(train_predict)
Y_train = scaler.inverse_transform([Y_train])
test_predict = scaler.inverse_transform(test_predict)
Y_test = scaler.inverse_transform([Y_test])
print('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:, 0]))
print('Train Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_train[0], train_predict[:, 0])))
print('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:, 0]))
print('Test Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_test[0], test_predict[:, 0])))
```

Now my issue is that val_loss = 0.0017 and loss = 0.0019,

but the RMSE values are:

```

Train Mean Absolute Error: 10.814174578676965
Train Root Mean Squared Error: 13.792484521895835
Test Mean Absolute Error: 8.059164253166095
Test Root Mean Squared Error: 10.6127240648618

```

Please help me understand where I am going wrong. I have been trying to figure this out for the last three days, but I can't. Please save my life!

python keras neural-network lstm query-performance
1 Answer

val_loss and loss are computed on the SCALED data during training, while the MAE and RMSE are computed after INVERSE-scaling the data back to the original units, so the latter reflect the real performance.
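To see that the two sets of numbers are consistent: for a MinMaxScaler the unscaled RMSE is just the scaled RMSE multiplied by the data range. With val_loss = 0.0017, the scaled RMSE is sqrt(0.0017) ≈ 0.041, so a data range of roughly 257 in the original units would produce exactly the reported test RMSE of ~10.6. A minimal sketch (the range of 257.4 is hypothetical, chosen only to match the reported numbers):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data range; the real one depends on the monitoring values.
y_true = np.random.rand(1000, 1) * 257.4

scaler = MinMaxScaler()
y_scaled = scaler.fit_transform(y_true)

# Pretend the model predicts the scaled targets with a small Gaussian error.
pred_scaled = y_scaled + np.random.normal(0, np.sqrt(0.0017), y_scaled.shape)

mse_scaled = np.mean((pred_scaled - y_scaled) ** 2)   # what Keras reports as loss
pred = scaler.inverse_transform(pred_scaled)          # back to original units
rmse = np.sqrt(np.mean((pred - y_true) ** 2))         # what the question computes

data_range = scaler.data_max_[0] - scaler.data_min_[0]
print(f"scaled MSE        ~ {mse_scaled:.4f}")        # ~0.0017
print(f"unscaled RMSE     ~ {rmse:.2f}")              # ~sqrt(0.0017) * range ~ 10.6
print(f"sqrt(MSE) * range = {np.sqrt(mse_scaled) * data_range:.2f}")
```

So nothing is wrong with the training: the loss and the RMSE are the same error expressed in different units.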
