Chainer autoencoder

Problem description · 0 votes · 1 answer

I am trying to write a vanilla autoencoder to compress 13 images, but I get the following error:

ValueError: train argument is not supported anymore. Use chainer.using_config

Each image has shape (21, 28, 3).

filelist = 'ex1.png', 'ex2.png',...11 other images
x = np.array([np.array(Image.open(fname)) for fname in filelist])
xs = x.astype('float32')/255.
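(Aside, not in the original question: `L.Linear(1764, 800)` implies each image is flattened before entering the network, since 21 × 28 × 3 = 1764. A minimal NumPy sketch of that preprocessing, with random data standing in for the PNG files:)

```python
import numpy as np

# Stand-in for the 13 RGB images of shape (21, 28, 3); the real code loads PNGs.
x = np.random.randint(0, 256, size=(13, 21, 28, 3)).astype('float32') / 255.

# Flatten each image to a 1764-dim vector so it matches L.Linear(1764, 800).
xs = x.reshape(len(x), -1)
print(xs.shape)  # (13, 1764)
```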

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1764,800)
      self.l2 = L.Linear(800,300)
      # decoder part
      self.l3 = L.Linear(300,800)
      self.l4 = L.Linear(800,1764)
      self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      loss = F.mean_squared_error(h, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(x))

  def decode(self, h, train=True):
      h = self.activation(self.l3(h))
      return self.l4(x)

n_epoch = 5
batch_size = 2
model = Autoencoder()

optimizer = optimizers.SGD(lr=0.05).setup(model)
train_iter = iterators.SerialIterator(xs,batch_size)
valid_iter = iterators.SerialIterator(xs,batch_size)

updater = training.StandardUpdater(train_iter,optimizer)
trainer = training.Trainer(updater,(n_epoch,"epoch"),out="result")

from chainer.training import extensions
trainer.extend(extensions.Evaluator(valid_iter, model, device=gpu_id))

trainer.run()

Is the problem caused by the number of nodes in the model, or by something else?

image-processing autoencoder chainer
1 Answer (score: 2)

You need to write the "decode" part.

When you take the mean_squared_error loss, the two arguments must have the same shape. The autoencoder encodes the original x into a small space (the 100-dim h), but afterwards we need to reconstruct x' from this h by adding the decoder part. The loss can then be computed between this reconstructed x' and the original x.

For example, as follows (sorry, I have not tested that it runs):

  • For Chainer v2 and later

The train argument is handled by the global config, so you do not need the train argument in the dropout function.

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608,500)
      self.l2 = L.Linear(500,100)
      # decoder part
      self.l3 = L.Linear(100,500)
      self.l4 = L.Linear(500,1308608)
    self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x):
      h = F.dropout(self.activation(self.l1(x)))
      return self.activation(self.l2(h))

  def decode(self, h):
      h = self.activation(self.l3(h))
      return self.l4(h)
  • For Chainer v1
class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608,500)
      self.l2 = L.Linear(500,100)
      # decoder part
      self.l3 = L.Linear(100,500)
      self.l4 = L.Linear(500,1308608)
    self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(h))

  def decode(self, h, train=True):
      h = self.activation(self.l3(h))
      return self.l4(h)
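(Editor's note, not part of the original answer: what the train flag / global config toggles can be mimicked in plain NumPy. This is the standard "inverted dropout" semantics, not Chainer's internal code: active in training mode, the identity at test time.)

```python
import numpy as np

def dropout(x, ratio=0.5, train=True):
    """Inverted dropout: zero each unit with probability `ratio` during
    training and rescale the survivors; do nothing at test time."""
    if not train:
        return x
    mask = (np.random.rand(*x.shape) >= ratio) / (1.0 - ratio)
    return x * mask

x = np.ones((2, 4), dtype='float32')
y_test = dropout(x, train=False)   # identity at test time
y_train = dropout(x, train=True)   # entries are either 0 or 2
print(np.array_equal(y_test, x))   # True
```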
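(Editor's note: the key point of the fix can be sketched in plain NumPy, using the question's layer sizes and random matrices standing in for the trained Linear layers. After decode, x_recon has the same shape as x, so the squared-error loss is well defined.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((2, 1764)).astype('float32')   # a mini-batch of flattened images

# Random matrices stand in for l1..l4; shapes mirror the encoder/decoder above.
W1, W2 = rng.random((1764, 800)), rng.random((800, 300))
W3, W4 = rng.random((300, 800)), rng.random((800, 1764))
relu = lambda a: np.maximum(a, 0)

h = relu(relu(x @ W1) @ W2)      # encode: (2, 1764) -> (2, 300)
x_recon = relu(h @ W3) @ W4      # decode: (2, 300) -> (2, 1764)

# mean_squared_error(x_recon, x) only makes sense because the shapes match.
loss = np.mean((x_recon - x) ** 2)
print(x_recon.shape == x.shape)  # True
```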

For the next step, you can also refer to the official variational autoencoder example.
