import tensorflow as tf
from tensorflow.keras.layers import Dense
import matplotlib.pyplot as plt

def buildModel(optimizer):
    model = tf.keras.models.Sequential([
        Dense(100, activation='relu'),
        Dense(82, activation='relu'),
        Dense(20, activation='relu'),
        Dense(6, activation='relu'),
        Dense(20, activation='softmax')
    ])
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

optimizer = tf.keras.optimizers.legacy.Adam()
model = buildModel(optimizer)
history = model.fit(train_x, train_y_lst, validation_data=(test_x, test_y_lst), epochs=50, batch_size=32, verbose=0)
# Plot training and validation loss
plt.figure()
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training and Validation Loss Curves')
plt.legend()
# Plot training and validation accuracy
plt.figure()
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Training and Validation Accuracy Curves')
plt.legend()
plt.show()
The test accuracy is also poor.
I'm new to this — any suggestions on what I might be doing wrong?
I expected the test loss to decrease like the training loss does.
My test_x looks like this:
-0.84335711]
[-0.1898388 -1.4177287 0.24718753 ... -0.33010045 0.77921928
-1.56813293]
[ 0.51887204 -1.34965479 0.19069737 ... 0.56236361 -0.03741466
-0.24596578]
...
[-0.11631875 0.46366703 -1.04400684 ... 0.23282911 -2.10649511
-0.41883463]
[-1.03632829 0.05419996 -2.22371652 ... 0.47133847 -1.70391277
-1.42387687]
[-0.12011524 -0.72294703 -0.74587529 ... 0.11331488 -1.81362912
-0.11828704]]
and test_y_lst:
array([[1, 0, 0, ..., 0, 0, 0],
[1, 0, 0, ..., 0, 0, 0],
[1, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]])
It's a multi-class classification problem.
Since you seem to be new to the concept, I'll point out a few things you can do to improve your results here, and with neural nets in general.
First, try scaling your input data into the (0, 1) range, and apply dropout to one or two layers, then see how the results change.
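A minimal sketch of both suggestions, assuming scikit-learn is available for the scaling step (the random stand-in arrays and the 0.3 dropout rate are placeholders, not values from your setup):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from sklearn.preprocessing import MinMaxScaler

# Hypothetical stand-in arrays; use your real train_x / test_x instead.
rng = np.random.default_rng(0)
train_x = rng.standard_normal((100, 10))
test_x = rng.standard_normal((20, 10))

# Scale inputs into (0, 1): fit the scaler on the training data only,
# then reuse it on the test data so no test statistics leak into training.
scaler = MinMaxScaler(feature_range=(0, 1))
train_x_scaled = scaler.fit_transform(train_x)
test_x_scaled = scaler.transform(test_x)

# Dropout after a hidden layer or two randomly zeroes a fraction of
# activations during training, which helps against overfitting.
model = tf.keras.models.Sequential([
    Dense(100, activation='relu'),
    Dropout(0.3),
    Dense(20, activation='relu'),
    Dropout(0.3),
    Dense(20, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

Note that the scaler is only fit on the training set; calling fit_transform on the test set as well would give each set its own scaling and distort the validation results.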