Numerical differences between Keras and NumPy

Question

To really understand convolutional layers, I reimplemented the forward method of a single Keras Conv2D layer in plain NumPy. The two outputs seem almost identical, but there are some small differences.

Getting the Keras output:

# K is the Keras backend, e.g. from tensorflow.keras import backend as K
inp = K.constant(test_x)
true_output = model.layers[0].call(inp).numpy()

My output:

import numpy as np


def relu(x):
    return np.maximum(0, x)


def forward(inp, filter_weights, filter_biases):
    # Hard-coded for this layer: 64x64 single-channel input, 32 filters
    # of size 3x3, stride 1, 'same' padding.
    result = np.zeros((1, 64, 64, 32))

    # Zero-pad by one pixel on each side ('same' padding for a 3x3 kernel).
    inp_with_padding = np.zeros((1, 66, 66, 1))
    inp_with_padding[0, 1:65, 1:65, :] = inp

    for filter_num in range(32):
        # Keras stores Conv2D kernels as (height, width, in_channels, out_channels).
        single_filter_weights = filter_weights[:, :, 0, filter_num]

        for i in range(64):
            for j in range(64):
                # Multiply the kernel element-wise with the 3x3 patch,
                # sum, add the bias, and apply the activation.
                prod = single_filter_weights * inp_with_padding[0, i:i+3, j:j+3, 0]
                filter_sum = np.sum(prod) + filter_biases[filter_num]
                result[0, i, j, filter_num] = relu(filter_sum)
    return result


my_output = forward(test_x, filter_weights, biases_weights)
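
For reference, filter_weights and biases_weights would come from the layer itself; Keras's get_weights() returns the kernel and bias of a Conv2D layer. A minimal sketch, assuming the shapes implied by the loop bounds above:

# kernel shape (3, 3, 1, 32), bias shape (32,) for this layer
filter_weights, biases_weights = model.layers[0].get_weights()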

The results are roughly the same, but here are a few examples of the differences:

Mine: 2.6608338356018066
Keras: 2.660834312438965

Mine: 1.7892705202102661
Keras: 1.7892701625823975

Mine: 0.007190803997218609
Keras: 0.007190565578639507

Mine: 4.970898151397705
Keras: 4.970897197723389

I've already tried casting everything to float32, but that doesn't fix it. Any ideas?

python numpy keras floating-point precision
1 Answer

Given how small the differences are, I'd say they are floating-point rounding errors. Floating-point addition is not associative, and Keras almost certainly accumulates the convolution's products in a different order than your explicit loop does, so the last few bits can differ even when both computations use float32. I'd suggest using np.isclose (or math.isclose) to check floats for "equality".
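
As a quick illustration that float addition is order-dependent (a minimal standalone sketch, not code from the question):

import numpy as np

a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(0.5)
print((a + b) + c)  # 0.5
print(a + (b + c))  # 0.0 -- the 0.5 is lost when added to -1e8 at float32 precision

And the suggested tolerance-based check, using the arrays from the question:

# True where corresponding elements agree within the default
# tolerances (rtol=1e-05, atol=1e-08).
print(np.isclose(my_output, true_output).all())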
