Why do my model predictions show zero variance across multiple predictions when using Monte Carlo dropout?


I am developing an image-segmentation CNN in Keras (PyTorch backend, if that matters). My code is based on a UNET segmentation example (here), which uses Monte Carlo dropout (MCD) at prediction time to approximate the uncertainty of the model's predictions. I train the model without dropout and export the weights, then re-import them at prediction time. When I run multiple predictions on the same image, I expect each output to differ slightly because of the 50% dropout rate on the inner layers, yet the 0th and 20th predictions on the same image are exactly identical.

Is there a way to verify that dropout is actually firing? My other thought was that loading the model was overwriting the model I had reconfigured with dropout enabled, but I have already switched to

model.load_weights()

instead of loading the whole model to avoid that, to no avail.
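As a sanity check independent of the full network, the behaviour being tested can be sketched in plain NumPy: if dropout is genuinely active at prediction time, repeated forward passes with freshly sampled masks should almost surely give different outputs for the same input. (This is an illustrative sketch, not the Keras internals; the names here are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_pass(x, drop_rate=0.5, rng=rng):
    """One stochastic forward pass: sample a fresh dropout mask and
    apply inverted-dropout scaling, as Keras does in training mode."""
    mask = rng.random(x.shape) >= drop_rate
    return x * mask / (1.0 - drop_rate)

x = np.ones((8, 8))        # stand-in for an activation map
a = mc_dropout_pass(x)
b = mc_dropout_pass(x)

# With dropout active, two passes should differ; identical outputs
# across many passes suggest dropout is being silently disabled.
print((a == b).all())
```

If the equivalent check on the real model (predicting the same image twice) keeps returning identical arrays, the dropout layers are running in inference mode despite `training=True`.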

For reference, my (self-generated) toy dataset consists of 256 × 512 grayscale images, and I am trying to use a 50% dropout rate during prediction.

filepath
is a .keras file containing pre-computed weights from the training run (without MCD).

drop_rate = 0.5
drop_train = True  # MC dropout at inference
# drop_train=False #normal (no) dropout at inference
# downsize the UNET for this example.
# the smaller network is faster to train
# and produces excellent results on the dataset at hand
nfilters = (N_filters / 8).astype("int")

# input
input_tensor = Input(shape=frames_test_set.shape[1:], name="input_tensor")

## Encoder
# Encoder block 0
e0 = Conv2D(filters=nfilters[0], kernel_size=(3, 3), padding="same")(input_tensor)
e0 = BatchNormalization(axis=batch_normalization_axis)(e0)
e0 = Activation("relu")(e0)
e0 = Conv2D(filters=nfilters[0], kernel_size=(3, 3), padding="same")(e0)
e0 = BatchNormalization(axis=batch_normalization_axis)(e0)
e0 = Activation("relu")(e0)

# Encoder block 1
e1 = MaxPooling2D((2, 2))(e0)
e1 = Conv2D(filters=nfilters[1], kernel_size=(3, 3), padding="same")(e1)
e1 = BatchNormalization(axis=batch_normalization_axis)(e1)
e1 = Activation("relu")(e1)
e1 = Conv2D(filters=nfilters[1], kernel_size=(3, 3), padding="same")(e1)
e1 = BatchNormalization(axis=batch_normalization_axis)(e1)
e1 = Activation("relu")(e1)

# Encoder block 2
e2 = Dropout(drop_rate)(e1, training=drop_train)
e2 = MaxPooling2D((2, 2))(e2)
e2 = Conv2D(filters=nfilters[2], kernel_size=(3, 3), padding="same")(e2)
e2 = BatchNormalization(axis=batch_normalization_axis)(e2)
e2 = Activation("relu")(e2)
e2 = Conv2D(filters=nfilters[2], kernel_size=(3, 3), padding="same")(e2)
e2 = BatchNormalization(axis=batch_normalization_axis)(e2)
e2 = Activation("relu")(e2)

# Encoder block 3
e3 = Dropout(drop_rate)(e2, training=drop_train)
e3 = MaxPooling2D((2, 2))(e3)
e3 = Conv2D(filters=nfilters[3], kernel_size=(3, 3), padding="same")(e3)
e3 = BatchNormalization(axis=batch_normalization_axis)(e3)
e3 = Activation("relu")(e3)
e3 = Conv2D(filters=nfilters[3], kernel_size=(3, 3), padding="same")(e3)
e3 = BatchNormalization(axis=batch_normalization_axis)(e3)
e3 = Activation("relu")(e3)

# Encoder block 4
e4 = Dropout(drop_rate)(e3, training=drop_train)
e4 = MaxPooling2D((2, 2))(e4)
e4 = Conv2D(filters=nfilters[4], kernel_size=(3, 3), padding="same")(e4)
e4 = BatchNormalization(axis=batch_normalization_axis)(e4)
e4 = Activation("relu")(e4)
e4 = Conv2D(filters=nfilters[4], kernel_size=(3, 3), padding="same")(e4)
e4 = BatchNormalization(axis=batch_normalization_axis)(e4)
e4 = Activation("relu")(e4)
# e4 = MaxPooling2D((2, 2))(e4)

## Decoder
# Decoder block 3
d3 = Dropout(drop_rate)(e4, training=drop_train)
d3 = UpSampling2D(
    (2, 2),
)(d3)
d3 = concatenate([e3, d3], axis=-1)  # skip connection
d3 = Conv2DTranspose(nfilters[3], (3, 3), padding="same")(d3)
d3 = BatchNormalization(axis=batch_normalization_axis)(d3)
d3 = Activation("relu")(d3)
d3 = Conv2DTranspose(nfilters[3], (3, 3), padding="same")(d3)
d3 = BatchNormalization(axis=batch_normalization_axis)(d3)
d3 = Activation("relu")(d3)

# Decoder block 2
d2 = Dropout(drop_rate)(d3, training=drop_train)
d2 = UpSampling2D(
    (2, 2),
)(d2)
d2 = concatenate([e2, d2], axis=-1)  # skip connection
d2 = Conv2DTranspose(nfilters[2], (3, 3), padding="same")(d2)
d2 = BatchNormalization(axis=batch_normalization_axis)(d2)
d2 = Activation("relu")(d2)
d2 = Conv2DTranspose(nfilters[2], (3, 3), padding="same")(d2)
d2 = BatchNormalization(axis=batch_normalization_axis)(d2)
d2 = Activation("relu")(d2)

# Decoder block 1
d1 = UpSampling2D(
    (2, 2),
)(d2)
d1 = concatenate([e1, d1], axis=-1)  # skip connection
d1 = Conv2DTranspose(nfilters[1], (3, 3), padding="same")(d1)
d1 = BatchNormalization(axis=batch_normalization_axis)(d1)
d1 = Activation("relu")(d1)
d1 = Conv2DTranspose(nfilters[1], (3, 3), padding="same")(d1)
d1 = BatchNormalization(axis=batch_normalization_axis)(d1)
d1 = Activation("relu")(d1)

# Decoder block 0
d0 = UpSampling2D(
    (2, 2),
)(d1)
d0 = concatenate([e0, d0], axis=-1)  # skip connection
d0 = Conv2DTranspose(nfilters[0], (3, 3), padding="same")(d0)
d0 = BatchNormalization(axis=batch_normalization_axis)(d0)
d0 = Activation("relu")(d0)
d0 = Conv2DTranspose(nfilters[0], (3, 3), padding="same")(d0)
d0 = BatchNormalization(axis=batch_normalization_axis)(d0)
d0 = Activation("relu")(d0)

# output
# out_class = Dense(1)(d0)
out_class = Conv2D(1, (1, 1), padding="same")(d0)
out_class = Activation("sigmoid", name="output")(out_class)

# create and compile the model
model = Model(inputs=input_tensor, outputs=out_class)
model.compile(
    loss={"output": "binary_crossentropy"},
    metrics={"output": "accuracy"},
    optimizer="adam",
)

model.load_weights(f"{filepath}.keras")
Y_ts_hat = model.predict(frames_test_set, batch_size=1)

T = 10

Y_ts_hat_variance = np.zeros(
    (Y_ts_hat.shape[0], Y_ts_hat.shape[1], Y_ts_hat.shape[2], 1, T)
)

Y_ts_hat_variance[:, :, :, :, 0] = Y_ts_hat

for t in range(T - 1):
    print(f"Model {t+1}/{T-1}")
    Y_ts_hat_variance[:, :, :, :, t + 1] = model.predict(frames_test_set, batch_size=1)

arrays_identical = (
    Y_ts_hat_variance[25, :, :, 0, 0] == Y_ts_hat_variance[25, :, :, 0, -1]
).all()
print(f"Arrays identical: {arrays_identical}")
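For reference, once the T stochastic predictions do differ, the MC-dropout uncertainty estimate is just the per-pixel moments over the sample axis. A minimal NumPy sketch, with synthetic random "predictions" standing in for `Y_ts_hat_variance`:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10
# synthetic stand-in for T stochastic predictions, shape (N, H, W, 1, T)
samples = rng.random((2, 4, 4, 1, T))

pred_mean = samples.mean(axis=-1)  # MC estimate of the prediction
pred_var = samples.var(axis=-1)    # per-pixel predictive variance

# Zero variance everywhere would mean every pass returned the
# same output, i.e. dropout never fired.
print(pred_var.max() > 0)
```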

I have run the model through 20 predictions and compared the same result image from the first and last predictions to see whether they match. Since MCD involves randomness, I expect this check to return false, but across all my tests the images are always identical.

python keras conv-neural-network image-segmentation keras-layer
1 Answer

Dropout is normally active only during model training, not when predicting values. Its main purpose is to keep the model from overfitting, which is only a concern during training, so it is usually disabled at prediction time. I would suggest not looking for a way to add dropout during prediction, because the network is generally more accurate without it (after training, of course).
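The standard inference-time behaviour described above can be sketched the same way: with `training=False`, dropout is an identity operation, so repeated predictions are fully deterministic, which is exactly the symptom in the question. (Illustrative NumPy sketch, not the Keras implementation.)

```python
import numpy as np

def dropout(x, rate, training, rng):
    if not training:
        return x  # inference mode: identity, fully deterministic
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((8, 8))
a = dropout(x, 0.5, training=False, rng=rng)
b = dropout(x, 0.5, training=False, rng=rng)
print(np.array_equal(a, b))
```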
