Grouped depthwise convolution performance

Problem description

I'm trying to improve the performance of a ResNeXt implementation in TensorFlow. David Berthelot mentioned a potential improvement on Twitter. I'd like to apply it to my implementation, but how does the reshape + sum fit into this?

# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim):  # relies on globals: is_training, cardinality
  x = tf.layers.conv2d(x, filters=64, kernel_size=1, strides=strides)
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  w = tf.get_variable(name='depthwise_filter', shape=[3, 3, 64, cardinality])
  x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
  x = tf.layers.batch_normalization(x, training=is_training)
  return tf.nn.relu(x)

Edit: I believe this implementation is correct and I just need to add some operations to improve performance. Rereading David's comment, depthwise + reshape + sum is not a single depthwise op but some other arrangement; the code above does not compute an equivalent of the version 3d bottleneck block.
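For reference, the naive grouped convolution can be written directly with tf.split and tf.concat. The sketch below is not from the original post; the function name and parameters are illustrative. This is the per-group fan-out that the depthwise reshape + sum trick is meant to replace:

import tensorflow as tf

def grouped_conv2d(x, kernel_size, cardinality):
  # naive grouped convolution: split the channels into `cardinality`
  # groups, convolve each group independently, then concatenate;
  # correct, but slow, since it launches one conv op per group
  groups = tf.split(x, cardinality, axis=3)
  outputs = [tf.layers.conv2d(g, filters=g.shape.as_list()[-1],
                              kernel_size=kernel_size, padding='same')
             for g in groups]
  return tf.concat(outputs, axis=3)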

Tags: tensorflow, convolution

2 Answers

Answer (2 votes):

This is how I implemented it:

class LayerCardinalConv(object):
    """Aggregated Residual Transformations for Deep Neural Networks https://arxiv.org/abs/1611.05431"""

    def __init__(self, name, w, nin, card, use_bias=True, init='he'):
        # weight_init is the author's initializer helper (not shown here)
        self.group = nin // card
        with tf.name_scope(name):
            self.conv = tf.Variable(weight_init(nin, self.group, [*w, nin, self.group], init), name='conv')
            self.bias = tf.Variable(tf.zeros([nin]), name='bias') if use_bias else 0

    def __call__(self, vin, train):
        s = tf.shape(vin)
        vout = tf.nn.depthwise_conv2d(vin, self.conv, strides=[1] * 4, padding='SAME')
        # fold the channel-multiplier axis out and sum over it
        vout = tf.reshape(vout, [s[0], s[1], s[2], self.group, s[3]])
        vout = tf.reduce_sum(vout, 3)
        return vout + self.bias

Notes:

  • w is the kernel shape, e.g. (3, 3)
  • nin is the number of input channels
  • card is the cardinality, i.e. the number of groups
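A minimal usage sketch (mine, not the answerer's; weight_init below is a hypothetical He-style stand-in for the helper the answer does not show):

import tensorflow as tf

def weight_init(nin, nout, shape, init='he'):
  # hypothetical stand-in: He-normal, scaled by kernel fan-in
  fan_in = shape[0] * shape[1] * nin
  return tf.random_normal(shape, stddev=(2.0 / fan_in) ** 0.5)

x = tf.placeholder(tf.float32, [None, 32, 32, 128])  # NHWC input
layer = LayerCardinalConv('cardinal1', w=(3, 3), nin=128, card=32)
y = layer(x, train=True)  # -> [None, 32, 32, 128]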

Hope it helps.


Answer (1 vote):

Depthwise convolution and grouped convolution are closely related. A grouped convolution applies a set of independent kernels across groups of channels, while a depthwise convolution applies a set of independent kernels to each individual input channel. The crucial point is that in both cases, every individual connection between an input and an output channel uses a weight that is not shared with any other input-output channel pair. As a result, we can (as the tweet says!) apply a reshape and a sum to emulate a grouped convolution with a depthwise convolution. The approach trades memory for speed, since we have to allocate a tensor several times larger for the intermediate computation.

A depthwise convolution maps each individual input channel to multiple output channels, while a grouped convolution maps blocks of input channels to blocks of output channels. If we want to apply a grouped convolution with 32 groups to an input with 128 channels, we can instead apply a depthwise convolution with a channel multiplier of 128 / 32 = 4. The output tensor is a disassembled version of the equivalent grouped convolution output: the first 16 channels of the depthwise output correspond to the first 4 channels of the grouped convolution output. We can reshape those 16 channels into a 4x4 block and sum along one of the new axes to obtain the equivalent grouped convolution output. Across all output channels, we simply add two new axes of size 4, sum, and reshape back to 128 channels.

# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim, is_training):  # cardinality: module-level constant (e.g. 32)
  input_channels = x.shape.as_list()[-1]
  bottleneck_depth = input_channels // 2
  x = tf.layers.conv2d(x, filters=bottleneck_depth, kernel_size=1, strides=strides)
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)

  group_size = bottleneck_depth // cardinality
  w = tf.get_variable(name='depthwise_filter', shape=[3, 3, bottleneck_depth, group_size])
  x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
  depthwise_shape = x.shape.as_list()
  # unpack the channels into [cardinality, group_size (input), group_size (multiplier)],
  # sum over the input axis, then pack back to bottleneck_depth channels;
  # -1 stands in for the batch axis, which may be None
  x = tf.reshape(x, [-1] + depthwise_shape[1:3] + [cardinality, group_size, group_size])
  x = tf.reduce_sum(x, axis=4)
  x = tf.reshape(x, [-1] + depthwise_shape[1:3] + [bottleneck_depth])

  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
  x = tf.layers.batch_normalization(x, training=is_training)
  return tf.nn.relu(x)
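
A hypothetical invocation of the block above (shapes and hyperparameters are illustrative only):

cardinality = 32
inputs = tf.placeholder(tf.float32, [None, 56, 56, 128])  # NHWC feature map
out = bottleneck(inputs, strides=1, dim=256, is_training=True)  # -> [None, 56, 56, 256]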

Edit: it seems I hadn't formulated the reshape/sum correctly. I've updated the code example above to reflect what I now believe is the correct transformation. The older version could be reduced to a depthwise convolution with a channel_multiplier of 1.

To make the difference easier to see, I'll illustrate the incorrect and the correct behavior in numpy with the weights fixed at 1. We'll look at a simpler 8-channel input with two groups.

import numpy as np

input = np.arange(8)
# => [0, 1, 2, 3, 4, 5, 6, 7]
# the result of applying a depthwise convolution with a channel multiplier of 4 and weights fixed at 1
depthwise_output = np.repeat(input, 4)
# => [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, ..., 6, 6, 7, 7, 7, 7]

The incorrect transformation:

x = depthwise_output.reshape((8, 4))
# => [[0, 0, 0, 0],
#     [1, 1, 1, 1],
#     [2, 2, 2, 2],
#     [3, 3, 3, 3],
#     [4, 4, 4, 4],
#     [5, 5, 5, 5],
#     [6, 6, 6, 6],
#     [7, 7, 7, 7]]
x = x.sum(axis=1)
# => [ 0,  4,  8, 12, 16, 20, 24, 28]

The correct transformation:

x = depthwise_output.reshape((2, 4, 4))
# => [[[0, 0, 0, 0],
#      [1, 1, 1, 1],
#      [2, 2, 2, 2],
#      [3, 3, 3, 3]],
# 
#     [[4, 4, 4, 4],
#      [5, 5, 5, 5],
#      [6, 6, 6, 6],
#      [7, 7, 7, 7]]]
x = x.sum(axis=1)
# => [[ 6,  6,  6,  6],
#     [22, 22, 22, 22]]
x = x.reshape((8,))
# => [ 6,  6,  6,  6, 22, 22, 22, 22]
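
As a sanity check (this bit is mine, not part of the original answer): with all weights fixed at 1, each output channel of a grouped convolution is just the sum of its group's inputs, so we can compare against that directly:

G, g = 2, 4  # cardinality, group size
grouped = np.repeat(input.reshape(G, g).sum(axis=1), g)
# => [ 6,  6,  6,  6, 22, 22, 22, 22]
assert np.array_equal(grouped, x)  # matches the reshape/sum result above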