Input shape of PyTorch tensors (translated from Keras code)

Problem description

I am trying to build a neural network with PyTorch. Most of it comes from code written in TensorFlow, which I am translating to use in PyTorch. But something is not right. Here is the relevant part of the code:

    self.initializer = nn.init.normal_

    ops = nn.Parameter(torch.empty(h_size, h_size, num_points * 2))
    inputs = torch.empty((0, 1296), requires_grad=True)

    self.x = nn.Sequential(
        nn.Linear(num_points, 16 * 16 * 2, bias=False),
        nn.LeakyReLU(),
        #nn.Reshape((16, 16, 2)),
    )

    self.conv_transpose_1 = nn.Sequential(
        nn.ConvTranspose2d(2, 64, kernel_size=4, stride=2, padding=1, bias=False),
        nn.InstanceNorm2d(64),
        nn.LeakyReLU(),
    )

    self.conv_transpose_2 = nn.Sequential(
        nn.ConvTranspose2d(64, 64, kernel_size=4, stride=1, padding=1, bias=False),
        nn.InstanceNorm2d(64),
        nn.LeakyReLU(),
    )

    self.conv_transpose_3 = nn.Sequential(
        nn.ConvTranspose2d(64, 32, kernel_size=4, stride=1, padding=1, bias=False),
    )

    self.conv_transpose_4 = nn.Sequential(
        nn.ConvTranspose2d(32, 2, kernel_size=4, stride=1, padding=1, bias=False),
    )

def forward(self, ops, inputs):
    x = self.x(inputs)
    x = self.conv_transpose_1(x)
    x = self.conv_transpose_2(x)
    x = self.conv_transpose_3(x)
    x = self.conv_transpose_4(x)
    x = self.density_matrix(x)
    complex_ops = convert_to_complex_ops(ops)
    x = self.expectation(complex_ops, x, prefactor)

    return x

In TensorFlow, ops and inputs are:

ops = tf.keras.layers.Input(
    shape=[h_size, h_size, num_points * 2], name="ops"
)
inputs = tf.keras.Input(shape=(num_points), name="inputs")

ops and inputs are the inputs used to instantiate the conditional GAN.
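For comparison: Keras Input layers are symbolic placeholders, while PyTorch has no equivalent layer — you simply build batched tensors and pass them to forward. A minimal sketch, assuming h_size = 16 and num_points = 1296 to match the shapes printed later in the question:

```python
import torch

h_size, num_points = 16, 1296  # assumed values, matching the TF shapes
batch = 8

# no Input layer in PyTorch: just create batched tensors directly
ops = torch.randn(batch, h_size, h_size, num_points * 2)
inputs = torch.randn(batch, num_points)

print(ops.shape)     # torch.Size([8, 16, 16, 2592])
print(inputs.shape)  # torch.Size([8, 1296])
```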

In PyTorch, the forward function is:

def forward(self, ops, inputs):
    x = self.x(inputs)
    x = self.conv_transpose_1(x)
    x = self.conv_transpose_2(x)
    x = self.conv_transpose_3(x)
    x = self.conv_transpose_4(x)
    x = self.density_matrix(x)
    complex_ops = convert_to_complex_ops(ops)
    prefactor = 1.0
    x = self.expectation(complex_ops, x, prefactor)
    x = self.noise(x)

But when I run it, I get the following error at

x = self.conv_transpose_1(x)

RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv_transpose2d,
but got input of size: [0, 512]

In TensorFlow, the shapes of ops and inputs are:

In [63]: ops.shape
Out[63]: TensorShape([None, 16, 16, 2592])

In [64]: inputs.shape
Out[64]: TensorShape([None, 1296])

But the PyTorch "translation" is wrong, and I believe the problem is in how I define ops and inputs in the first place. I know that PyTorch's nn.Conv2d expects input of shape [batch_size, channels, height, width], but I don't know how to get it right. This is an exercise I set myself to understand PyTorch, but I'm stuck.
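The [0, 512] in the error is consistent with this diagnosis: the Linear layer emits 16 * 16 * 2 = 512 features, the commented-out Reshape is never applied, and torch.empty((0, 1296)) has batch size 0, so the transposed convolution receives a 2-D tensor. A minimal sketch of one way to restore the reshape step in PyTorch's channels-first layout, assuming num_points = 1296 (a sketch, not necessarily the full fix):

```python
import torch
import torch.nn as nn

num_points = 1296  # assumed, matching the TF shapes above

# nn.Unflatten restores the missing Reshape, but channels-first:
# the 512 Linear features become (C, H, W) = (2, 16, 16), not (16, 16, 2)
x_net = nn.Sequential(
    nn.Linear(num_points, 16 * 16 * 2, bias=False),
    nn.LeakyReLU(),
    nn.Unflatten(1, (2, 16, 16)),  # replaces the commented-out nn.Reshape
)

inputs = torch.randn(4, num_points)  # a real batch, not torch.empty((0, 1296))
x = x_net(inputs)
print(x.shape)  # torch.Size([4, 2, 16, 16]) — valid 4D input for ConvTranspose2d

# the first transposed-convolution layer from the question now runs cleanly
conv = nn.ConvTranspose2d(2, 64, kernel_size=4, stride=2, padding=1, bias=False)
y = conv(x)
print(y.shape)  # torch.Size([4, 64, 32, 32])
```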

Any help???

Edit: the original TensorFlow code, which works fine, is:

initializer = tf.random_normal_initializer(0.0, 0.02)

ops = tf.keras.layers.Input(
    shape=[h_size, h_size, num_points * 2], name="opS"
)
inputs = tf.keras.Input(shape=(num_points), name="inputs")

x = tf.keras.layers.Dense(
    16 * 16 * 2,
    use_bias=False,
    kernel_initializer=tf.random_normal_initializer(0.0, 0.02),
)(inputs)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Reshape((16, 16, 2))(x)

x = tf.keras.layers.Conv2DTranspose(
    64, 4, use_bias=False, strides=1, padding="same", kernel_initializer=initializer
)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(
    64, 4, use_bias=False, strides=1, padding="same", kernel_initializer=initializer
)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(
    32, 4, use_bias=False, strides=1, padding="same", kernel_initializer=initializer
)(x)

x = tf.keras.layers.Conv2DTranspose(
    2, 4, use_bias=False, strides=1, padding="same", kernel_initializer=initializer
)(x)
complex_ops = convert_to_complex_ops(ops)
prefactor = 1.0
x = Expectation()(complex_ops, x, prefactor)
x = tf.keras.layers.GaussianNoise(noise)(x)

return tf.keras.Model(inputs=[ops, inputs], outputs=x)
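A general gotcha when porting this model: Keras Conv2DTranspose and InstanceNormalization(axis=3) operate channels-last (NHWC), while PyTorch's ConvTranspose2d and InstanceNorm2d are channels-first (NCHW). Tensors carried over from the TF side, such as ops, would likely need a permute; a sketch with an assumed batch size of 4:

```python
import torch

# Keras channels-last layout: [batch, height, width, channels]
ops_nhwc = torch.randn(4, 16, 16, 2592)

# PyTorch convolutions expect channels-first: [batch, channels, height, width]
ops_nchw = ops_nhwc.permute(0, 3, 1, 2).contiguous()
print(ops_nchw.shape)  # torch.Size([4, 2592, 16, 16])
```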
Tags: python, tensorflow, torch