I'm just playing around with Keras, but I'm having trouble teaching it a basic function (multiply by 2). My setup is below. Since I'm new to this, I added comments describing what I believe happens at each step.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)
y_train = x_train * 2

model = Sequential()
model.add(Dense(32, input_dim=1, activation='sigmoid')) # add a 32-node layer
model.add(Dense(32, activation='sigmoid'))              # add a second 32-node layer
model.add(Dense(1, activation='sigmoid'))               # add a final output layer
model.compile(loss='mse',
              optimizer='rmsprop') # compile it with loss being mean squared error
model.fit(x_train, y_train, epochs=10, batch_size=100)  # train
score = model.evaluate(x_train, y_train, batch_size=100)
print(score)
I get the following output:
Epoch 1/10
1000/1000 [==============================] - 0s 355us/step - loss: 1334274.0375
Epoch 2/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333999.8250
Epoch 3/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333813.4062
Epoch 4/10
1000/1000 [==============================] - 0s 28us/step - loss: 1333679.2625
Epoch 5/10
1000/1000 [==============================] - 0s 27us/step - loss: 1333591.6750
Epoch 6/10
1000/1000 [==============================] - 0s 51us/step - loss: 1333522.0000
Epoch 7/10
1000/1000 [==============================] - 0s 23us/step - loss: 1333473.7000
Epoch 8/10
1000/1000 [==============================] - 0s 24us/step - loss: 1333440.6000
Epoch 9/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333412.0250
Epoch 10/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333390.5000
1000/1000 [==============================] - 0s 66us/step
['loss']
1333383.1143554687
The loss seems extremely high for such a basic function, and I'm confused about why it can't learn it. Am I misunderstanding something, or am I doing something wrong?
Use the `relu` activation instead of `sigmoid`: a sigmoid on the output layer squashes every prediction into (0, 1), so the network can never reach targets as large as 2000. Also use `adam` instead of `rmsprop`; it's almost always better. With those changes and more epochs, I get the following output:
Epoch 860/1000
1000/1000 [==============================] - 0s 29us/step - loss: 5.1868e-08
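The changes above can be sketched as follows. This is a minimal illustration, not the exact code behind the log above: it uses the `tf.keras` API, keeps the questioner's layer sizes, switches the hidden activations to `relu`, drops the activation on the output layer so predictions are unbounded, and compiles with `adam`. The 1000-epoch count matches the training log shown.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)
y_train = x_train * 2

model = Sequential()
model.add(Dense(32, input_dim=1, activation='relu'))  # relu instead of sigmoid
model.add(Dense(32, activation='relu'))
model.add(Dense(1))                                   # linear output: no (0, 1) squashing
model.compile(loss='mse', optimizer='adam')           # adam instead of rmsprop

history = model.fit(x_train, y_train, epochs=1000, batch_size=100, verbose=0)

# Prediction for x = 3 should approach 6 as training converges.
print(model.predict(np.array([[3.0]])))
```

The key design point is the output layer: for regression onto an unbounded target, the final `Dense(1)` should have no saturating activation, otherwise the loss plateaus no matter how long you train.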