Keras NLP: validation loss increases while training accuracy increases


I've looked at other posts with similar problems, and it seems my model is overfitting. However, I've already tried regularization, dropout, reducing parameters, lowering the learning rate, and changing the loss function, and nothing seems to help.

Here is my model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Dropout, Bidirectional, GRU,
                                     GlobalMaxPooling1D, Dense)

model = Sequential([
    Embedding(max_words, 64),
    Dropout(0.5),
    Bidirectional(GRU(64, return_sequences=True), merge_mode='concat'),
    GlobalMaxPooling1D(),
    Dense(64),
    Dropout(0.5),
    Dense(1, activation='sigmoid')
])
model.summary()

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=25, verbose=1, validation_data=(x_test, y_test), shuffle=True)

And here is my training output:

Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, None, 64)          320000    
_________________________________________________________________
dropout_6 (Dropout)          (None, None, 64)          0         
_________________________________________________________________
bidirectional_3 (Bidirection (None, None, 128)         49920     
_________________________________________________________________
global_max_pooling1d_3 (Glob (None, 128)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 64)                8256      
_________________________________________________________________
dropout_7 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 65        
=================================================================
Total params: 378,241
Trainable params: 378,241
Non-trainable params: 0
_________________________________________________________________
Epoch 1/25
229/229 [==============================] - 7s 32ms/step - loss: 0.6952 - accuracy: 0.4939 - val_loss: 0.6923 - val_accuracy: 0.5240
Epoch 2/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6917 - accuracy: 0.5144 - val_loss: 0.6973 - val_accuracy: 0.4815
Epoch 3/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6709 - accuracy: 0.5881 - val_loss: 0.7164 - val_accuracy: 0.4784
Epoch 4/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6070 - accuracy: 0.6711 - val_loss: 0.7704 - val_accuracy: 0.4977
Epoch 5/25
229/229 [==============================] - 7s 30ms/step - loss: 0.5370 - accuracy: 0.7325 - val_loss: 0.8411 - val_accuracy: 0.4876
Epoch 6/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4770 - accuracy: 0.7714 - val_loss: 0.9479 - val_accuracy: 0.4784
Epoch 7/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4228 - accuracy: 0.8016 - val_loss: 1.0987 - val_accuracy: 0.4884
Epoch 8/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3697 - accuracy: 0.8344 - val_loss: 1.2714 - val_accuracy: 0.4760
Epoch 9/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3150 - accuracy: 0.8582 - val_loss: 1.4184 - val_accuracy: 0.4822
Epoch 10/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2725 - accuracy: 0.8829 - val_loss: 1.6053 - val_accuracy: 0.4946
Epoch 11/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2277 - accuracy: 0.9056 - val_loss: 1.8131 - val_accuracy: 0.4884
Epoch 12/25
229/229 [==============================] - 7s 31ms/step - loss: 0.1929 - accuracy: 0.9253 - val_loss: 1.9327 - val_accuracy: 0.4977
Epoch 13/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1717 - accuracy: 0.9318 - val_loss: 2.2280 - val_accuracy: 0.4900
Epoch 14/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1643 - accuracy: 0.9324 - val_loss: 2.2811 - val_accuracy: 0.4915
Epoch 15/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1419 - accuracy: 0.9439 - val_loss: 2.4530 - val_accuracy: 0.4830
Epoch 16/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1255 - accuracy: 0.9521 - val_loss: 2.6692 - val_accuracy: 0.4992
Epoch 17/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1124 - accuracy: 0.9558 - val_loss: 2.8106 - val_accuracy: 0.4892
Epoch 18/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1130 - accuracy: 0.9556 - val_loss: 2.6792 - val_accuracy: 0.4907
Epoch 19/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1085 - accuracy: 0.9610 - val_loss: 2.8966 - val_accuracy: 0.5093
Epoch 20/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0974 - accuracy: 0.9656 - val_loss: 2.8636 - val_accuracy: 0.5147
Epoch 21/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0921 - accuracy: 0.9663 - val_loss: 2.9874 - val_accuracy: 0.4977
Epoch 22/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0888 - accuracy: 0.9685 - val_loss: 3.0295 - val_accuracy: 0.4969
Epoch 23/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0762 - accuracy: 0.9731 - val_loss: 3.0607 - val_accuracy: 0.4884
Epoch 24/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0842 - accuracy: 0.9692 - val_loss: 3.0552 - val_accuracy: 0.4900
Epoch 25/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0816 - accuracy: 0.9693 - val_loss: 2.9571 - val_accuracy: 0.5015

My validation loss just keeps increasing. I'm trying to predict political affiliation from tweets. The dataset I'm using has worked well with other models, so could my data preprocessing be the problem instead?

import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load the tweets and their labels from a single read of the CSV.
dataset = pd.read_csv('political_tweets.csv')
tweets = dataset["tweet"].values
labels = dataset["dem_or_rep"].values

x_train, x_test, y_train, y_test = train_test_split(tweets, labels, test_size=0.15, shuffle=True)
print(x_train[0])
print(x_test[0])

max_words = 10000
max_len = 25

tokenizer = Tokenizer(num_words=max_words,
                      filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n1234567890',
                      lower=False, oov_token="<OOV>")

# Fit the tokenizer on the training data only and reuse it for the test set;
# refitting on the test set would assign different word indices to each split.
tokenizer.fit_on_texts(x_train)

x_train = tokenizer.texts_to_sequences(x_train)
x_train = pad_sequences(x_train, max_len, padding='post', truncating='post')

x_test = tokenizer.texts_to_sequences(x_test)
x_test = pad_sequences(x_test, max_len, padding='post', truncating='post')
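
One quick way to sanity-check this preprocessing is to map a padded sequence back into words and confirm it still reads like the original tweet. A minimal sketch using the tokenizer built above; index 0 is padding and has no entry in index_word, so it is rendered with a placeholder:

# Decode the first training example from its padded token ids back to words.
index_word = tokenizer.index_word  # {index: word}, populated by fit_on_texts
decoded = ' '.join(index_word.get(i, '<PAD>') for i in x_train[0])
print(decoded)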

I'm really stumped. Any help is appreciated.

tensorflow keras nlp word-embedding hyperparameters
1 Answer

You're doing binary classification and your validation accuracy is near 50%. That means your model has learned nothing useful; it's equivalent to random guessing.

Meanwhile your training accuracy is very high, which indicates that your model is badly overfitting.

  1. Don't apply dropout right after the embedding layer; it tends to wreck the learned representations.

  2. Remove the Dense(64) after GlobalMaxPooling1D.

  3. Use recurrent_dropout in the GRU.

  4. Train for fewer epochs (see the sketch after this list).

  5. Reduce the vocabulary size and remove stopwords. With sequences only 25 tokens long, noisy stopwords can mislead the model.

import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')  # one-time download of the stopword list
stop_words = set(stopwords.words('english'))
  6. If your model still overfits after that, try reducing the embedding output_dim and the number of GRU units; there are many combinations worth trying.
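
Putting points 1 through 4 and 6 together, here is a minimal sketch of what a slimmed-down model could look like. The exact embedding size, GRU units, recurrent_dropout rate, and EarlyStopping patience below are illustrative assumptions, not tuned values:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, GRU, GlobalMaxPooling1D, Dense
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Embedding(max_words, 32),                  # smaller output_dim (point 6), no Dropout after it (point 1)
    Bidirectional(GRU(32, return_sequences=True,
                      recurrent_dropout=0.2),  # dropout inside the recurrent step (point 3)
                  merge_mode='concat'),
    GlobalMaxPooling1D(),                      # intermediate Dense(64) removed (point 2)
    Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Early stopping effectively trains for fewer epochs (point 4): it halts once
# val_loss stops improving and restores the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=32, epochs=25,
          validation_data=(x_test, y_test), callbacks=[early_stop])

If the gap between training and validation loss is still large after this, keep shrinking the embedding output_dim and the GRU units as in point 6.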