I am implementing a classifier to recognize 3 different types of images, and my final layer has 3 neurons with sigmoid activation:
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu',
                 input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
# more conv layers
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(3, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
The training-set labels are one-hot encoded, and there are plenty of training examples for all three classes.
But when I run model.predict(X) on the test set, the first 10 outputs are:
[[0. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]
[0. 1. 1.]
[1. 1. 1.]
[0. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
model.predict() should output probabilities, and each row should sum to 1, but in the actual results each class sometimes gets a probability of 1. Does anyone know why the probabilities come out like this?
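The behavior can be reproduced with plain NumPy: sigmoid squashes each output neuron independently, so several saturated logits all map to (nearly) 1, whereas softmax normalizes each row into a distribution. A minimal sketch with illustrative logit values:

```python
import numpy as np

logits = np.array([[4.0, 6.0, 8.0],
                   [9.0, 9.0, 9.0]])

# Sigmoid is applied element-wise: each neuron is squashed on its own,
# so large logits all land near 1 and the rows need not sum to 1.
sig = 1.0 / (1.0 + np.exp(-logits))   # rows round to [0.982, 0.998, 1.0] and [1, 1, 1]

# Softmax normalizes across each row, so every row sums to exactly 1.
shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
soft = shifted / shifted.sum(axis=1, keepdims=True)

print(sig.sum(axis=1))   # well above 1 for both rows
print(soft.sum(axis=1))  # 1.0 for both rows
```

This is exactly what the printed predictions show: three independent sigmoids that have all saturated, not a probability distribution over the three classes.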
Sigmoid is applied to each of the 3 output neurons independently, so the outputs are not a probability distribution and need not sum to 1. For a single-label, 3-class problem, use softmax as the final activation. If your targets are integer class labels with shape (n_samples,), use sparse_categorical_crossentropy:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.randint(0, 10, (1000, 100))
y = np.random.randint(0, 3, 1000)

model = Sequential([
    Dense(128, input_dim=100),
    Dense(3, activation='softmax'),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)
Otherwise, if you have one-hot encoded your targets into a 2D shape (n_samples, n_classes), you can use categorical_crossentropy with softmax as the final activation:
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

X = np.random.randint(0, 10, (1000, 100))
y = pd.get_dummies(np.random.randint(0, 3, 1000)).values

model = Sequential([
    Dense(128, input_dim=100),
    Dense(3, activation='softmax'),
])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)
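With either setup, every row of model.predict(X_test) now sums to 1, and the predicted class is the row-wise argmax. A small sketch, using made-up probability rows in place of real predictions:

```python
import numpy as np

# Stand-in for model.predict(X_test): each row is a softmax distribution.
probs = np.array([[0.70, 0.20, 0.10],
                  [0.05, 0.15, 0.80],
                  [0.30, 0.45, 0.25]])

print(probs.sum(axis=1))              # each row sums to 1.0
pred_classes = probs.argmax(axis=1)   # index of the most probable class per row
print(pred_classes)                   # [0 2 1]
```

The same argmax step works whether you trained with integer labels (sparse_categorical_crossentropy) or one-hot labels (categorical_crossentropy), since it only depends on the predicted probabilities.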