Keras LSTM - validation loss increasing from Epoch #1

Problem description

I'm currently working on my first "real" deep learning project: (surprise, surprise) predicting stock movements. I know the odds are 1000:1 against me producing anything useful, but I'm enjoying it and want to see it through. I've learned more in a few weeks of trying this than in my previous 6 months of completing MOOCs.

I'm building an LSTM with Keras to, for now, predict the next step ahead, and I have tried framing the task both as classification (up/down/steady) and, currently, as a regression problem. Both lead to the same roadblock: my validation loss never improves from epoch #1.
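For reference, a hypothetical sketch of how the classification variant differs from the regression set-up shown below: only the output head, loss, and metric change (this assumes one-hot encoded up/down/steady labels):

# Same kind of stateful LSTM stack as below, but with a 3-way softmax head
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(128, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='nadam',
              metrics=['accuracy'])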

I can get the model to overfit such that the training MSE loss approaches zero (and accuracy reaches 100% in the classification framing), but the validation loss never decreases at any stage. That screams overfitting to my untrained eye, so I added varying amounts of dropout, but all that does is stifle the model's learning/training accuracy, with no improvement at all in validation accuracy.
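For concreteness, here is a sketch of one way dropout can be wired into the LSTM stack shown below (the 0.2 rates are illustrative, not the exact values tried):

# Dropout on the layer inputs plus the recurrent connections
model.add(LSTM(512, return_sequences=True, stateful=True,
               dropout=0.2, recurrent_dropout=0.2,
               batch_input_shape=(batch_size, timesteps, features)))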

I have tried changing a wide range of hyperparameters - learning rate, optimizer, batch size, lookback window, #layers, #units, dropout, #samples, etc. I have also tried subsets of my data and subsets of my features, but I just can't get it to work, so any help would be greatly appreciated.

Code below (I know it's not pretty):

# Imports needed by the snippet below (added for completeness)
import feather
import numpy as np
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ReduceLROnPlateau

# Import saved full dataframe ~ 200 features
df = feather.read_dataframe('df_feathered')
df.set_index('time', inplace=True)

# Difference the dataset to make stationary
df = df.diff(periods=1, axis=0)

# MAKE LARGE SAMPLE FOR TESTING
df_train = df.loc['2017-3-1':'2017-6-30']
df_val = df.loc['2017-7-1':'2017-8-31']
df_test = df.loc['2017-9-1':'2017-9-30']

# Make x_train, x_val sets by dropping target variable
x_train = df_train.drop('close+1', axis=1)
x_val = df_val.drop('close+1', axis=1)

# Fit the scaler on the training data, then apply the same transform
# to the validation set (named x_test here and below)
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_val)

# scaler = MinMaxScaler(feature_range=(0,1))
# x_train = scaler.fit_transform(df_train1)
# x_test = scaler.transform(df_val1)

# Create y_train, y_test: simply the target variable for regression
# (y_test is the validation-period target)
y_train = df_train['close+1']
y_test = df_val['close+1']

# Define Lookback window for LSTM input
sliding_window = 15

# Convert x_train, x_test, y_train, y_test into 3d arrays
# (samples, timesteps, features) for LSTM input
dataXtrain = []
for i in range(len(x_train)-sliding_window-1):
    a = x_train[i:(i+sliding_window), 0:(x_train.shape[1])]
    dataXtrain.append(a)

dataXtest = []
for i in range(len(x_test)-sliding_window-1):
    a = x_test[i:(i+sliding_window), 0:(x_test.shape[1])]
    dataXtest.append(a)

dataYtrain = []
for i in range(len(y_train)-sliding_window-1):
    dataYtrain.append(y_train.iloc[i + sliding_window])

dataYtest = []
for i in range(len(y_test)-sliding_window-1):
    dataYtest.append(y_test.iloc[i + sliding_window])

# Trim the data so the sample counts are divisible by a variety of
# batch_sizes for training (stateful models require this)
# Start at 1000 to exclude rows with replaced NaN values
dataXtrain = np.array(dataXtrain[1000:172008])
dataYtrain = np.array(dataYtrain[1000:172008])
dataXtest = np.array(dataXtest[1000:83944])
dataYtest = np.array(dataYtest[1000:83944])

# Checking input shapes
print('dataXtrain size is: {}'.format((dataXtrain).shape))
print('dataXtest size is: {}'.format((dataXtest).shape))
print('dataYtrain size is: {}'.format((dataYtrain).shape))
print('dataYtest size is: {}'.format((dataYtest).shape))

### ACTUAL LSTM MODEL

batch_size = 256
timesteps = dataXtrain.shape[1]
features = dataXtrain.shape[2]

# Model set-up: stacked 4-layer stateful LSTM (stateful=True carries
# hidden state across batches and requires a fixed batch_input_shape)
model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(128, stateful=True))
model.add(Dense(1, activation='linear'))

model.summary()

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=5, min_lr=0.000001, verbose=1)

def coeff_determination(y_true, y_pred):
    from keras import backend as K
    SS_res =  K.sum(K.square( y_true-y_pred ))
    SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
    return ( 1 - SS_res/(SS_tot + K.epsilon()) )

model.compile(loss='mse',
              optimizer='nadam',
              metrics=[coeff_determination,'mse','mae','mape'])

# shuffle=False keeps the batches in chronological order, which the
# stateful layers rely on
history = model.fit(dataXtrain, dataYtrain, validation_data=(dataXtest, dataYtest),
                    epochs=100, batch_size=batch_size, shuffle=False, verbose=1,
                    callbacks=[reduce_lr])

score = model.evaluate(dataXtest, dataYtest,batch_size=batch_size, verbose=1)
print(score)

predictions = model.predict(dataXtest, batch_size=batch_size)
print(predictions)

import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(history.history['mean_squared_error'])
#plt.plot(history.history['val_mean_squared_error'])
plt.plot(history.history['coeff_determination'])
plt.plot(history.history['val_coeff_determination'])
#plt.plot(history.history['mean_absolute_error'])
#plt.plot(history.history['mean_absolute_percentage_error'])
#plt.plot(history.history['val_mean_absolute_percentage_error'])
#plt.title("MSE")
plt.ylabel("R2")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.plot(history.history["loss"][5:])
plt.plot(history.history["val_loss"][5:])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.figure(figsize=(20,8))
plt.plot(dataYtest)
plt.plot(predictions)
plt.title("Prediction")
plt.ylabel("Price")
plt.xlabel("Time")
plt.legend(["Truth", "Prediction"], loc="best")
plt.show()
Tags: python, machine-learning, keras, deep-learning, data-science
3 Answers

Score: 2

Maybe you should keep in mind that you are predicting stock returns, and quite possibly there is nothing to predict at all. So the increase in val_loss may not be overfitting at all. Rather than adding more dropout, maybe you should think about adding more layers to increase the model's power.
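As a sketch of that suggestion (the depth and layer widths here are purely illustrative), a higher-capacity stack could look like:

# One extra recurrent layer to increase capacity; sizes are illustrative
model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(LSTM(512, return_sequences=True, stateful=True))
model.add(LSTM(256, return_sequences=True, stateful=True))
model.add(LSTM(256, return_sequences=True, stateful=True))
model.add(LSTM(128, stateful=True))
model.add(Dense(1, activation='linear'))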


Score: 0

Try reducing the learning rate drastically (and remove the dropouts temporarily).

Why are you using shuffle=False in the fit() function?
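A minimal sketch of the first suggestion, keeping the Nadam optimizer from the question but with an explicit, much smaller learning rate (the 1e-5 value is illustrative, not tuned):

# Recompile with a drastically reduced learning rate
from keras.optimizers import Nadam

model.compile(loss='mse',
              optimizer=Nadam(lr=1e-5),
              metrics=['mse', 'mae'])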


Score: 0

Try changing the optimizer and the learning rate. Regularization works well in cases of overfitting.
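A sketch combining both ideas (the optimizer choice, learning rate, and L2 penalty of 1e-4 are illustrative assumptions, not recommendations):

# Adam with an explicit learning rate, plus L2 weight regularization
from keras.optimizers import Adam
from keras.regularizers import l2

model = Sequential()
model.add(LSTM(128, stateful=True, kernel_regularizer=l2(1e-4),
               batch_input_shape=(batch_size, timesteps, features)))
model.add(Dense(1, activation='linear', kernel_regularizer=l2(1e-4)))
model.compile(loss='mse', optimizer=Adam(lr=1e-4))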
