PyTorch `backward` RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed

Problem description (votes: 2, answers: 2)

I am implementing DDPG with PyTorch (0.4) and got stuck backpropagating the loss. So, first, my code that performs the update:

def update_nets(self, transitions):
    """
    Performs one update step
    :param transitions: list of sampled transitions
    """
    # get batches
    batch = transition(*zip(*transitions))
    states = torch.stack(batch.state)
    actions = torch.stack(batch.action)
    next_states = torch.stack(batch.next_state)
    rewards = torch.stack(batch.reward)

    # zero gradients
    self._critic.zero_grad()

    # compute critic's loss
    y = rewards.view(-1, 1) + self._gamma * \
        self.critic_target(next_states, self.actor_target(next_states))

    loss_critic = F.mse_loss(y, self._critic(states, actions),
                             size_average=True)

    # backpropagate it
    loss_critic.backward()
    self._optim_critic.step()

    # zero gradients
    self._actor.zero_grad()

    # compute actor's loss
    loss_actor = ((-1.) * self._critic(states, self._actor(states))).mean()

    # backpropagate it
    loss_actor.backward()
    self._optim_actor.step()

    # do soft updates
    self.perform_soft_update(self.actor_target, self._actor)
    self.perform_soft_update(self.critic_target, self._critic)

where self._actor, self._critic, self.actor_target and self.critic_target are the nets.

If I run this, I get the following error in the second iteration:

..., line 221, in update_nets
    loss_critic.backward()
..., line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
..., line 89, in backward
    allow_unreachable=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

and I don't know what is causing it.

What I do know is that the loss_critic.backward() call causes the error. I have already debugged loss_critic - it has a valid value. If I replace the loss computation with a simple Tensor containing the value 1,

loss_critic = torch.tensor(1., device=self._device, dtype=torch.float, requires_grad=True)

it works fine. Furthermore, I have checked that I am not keeping any results around that could cause the error. Also, updating the actor with loss_actor does not cause any problems.

Does anyone know what is going wrong here?

Thanks!
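For reference, the error above is the generic symptom of running backward() a second time through a graph whose buffers were freed by the first call. The following is a minimal, self-contained sketch (illustrative only, not the code from this question) that reproduces the same RuntimeError by reusing a tensor that still carries its computation graph across iterations:

import torch

w = torch.randn(3, requires_grad=True)  # stands in for a network parameter
x = torch.randn(3)

# this tensor keeps a reference to the graph built from w
cached = (w * x).sum()

for i in range(2):
    loss = cached * 2.0  # builds a new node on top of the old graph
    # the second iteration raises:
    # RuntimeError: Trying to backward through the graph a second time,
    # but the buffers have already been freed.
    loss.backward()

Anything the first backward() freed is needed again by the second one; the same thing happens whenever a tensor created in an earlier iteration ends up inside a new loss.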

Update

I replaced

# zero gradients
self._critic.zero_grad()

and

# zero gradients
self._actor.zero_grad()

(both calls) with

# zero gradients
self._critic.zero_grad()
self._actor.zero_grad()
self.critic_target.zero_grad()
self.actor_target.zero_grad()

but it still fails with the same error. In addition, at the end of each iteration the code performs the update

def perform_soft_update(self, target, trained):
    """
    Performs the soft update
    :param target: Net to be updated
    :param trained: Trained net - used for update
    """
    for param_target, param_trained in \
            zip(target.parameters(), trained.parameters()):
        param_target.data.copy_(
            param_target.data * (
                    1.0 - self._tau) + param_trained * self._tau
        )
neural-network pytorch backpropagation reinforcement-learning loss
2 Answers

2 votes

I found the solution. For training purposes I was saving tensors in my replay_buffer and using them in every iteration in the

# get batches
batch = transition(*zip(*transitions))
states = torch.stack(batch.state)
actions = torch.stack(batch.action)
next_states = torch.stack(batch.next_state)
rewards = torch.stack(batch.reward)

code snippet. Saving these tensors, which still carry their computation graphs, was causing the problem. So I changed my code to store only the plain data (tensor.data.numpy().tolist()) and to put it back into a tensor only when it is needed.

In more detail: in DDPG I evaluate the policy in every iteration and perform one learning step using a batch. Now I save the evaluation in the replay buffer via:

action = self.action(state)
...
self.replay_buffer.push(state.data.numpy().tolist(), action.data.numpy().tolist(), ...)

and use it in the batch-building snippet above, turning the stored data back into tensors only at that point.
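To make this concrete, here is a small sketch of the pattern described in this answer; the ReplayBuffer and Transition names are illustrative stand-ins, not the poster's actual classes. The buffer stores only plain Python data, so no computation graph survives between iterations, and fresh tensors are built only when a batch is sampled:

import random
from collections import deque, namedtuple

import torch

Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))


class ReplayBuffer:
    """Stores plain Python data, never tensors that carry autograd history."""

    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, next_state, reward):
        # strip the graph: keep only the raw numbers
        self.memory.append(Transition(
            state.detach().cpu().numpy().tolist(),
            action.detach().cpu().numpy().tolist(),
            next_state.detach().cpu().numpy().tolist(),
            float(reward),
        ))

    def sample(self, batch_size):
        batch = Transition(*zip(*random.sample(self.memory, batch_size)))
        # fresh tensors with no history from earlier iterations
        states = torch.tensor(batch.state, dtype=torch.float)
        actions = torch.tensor(batch.action, dtype=torch.float)
        next_states = torch.tensor(batch.next_state, dtype=torch.float)
        rewards = torch.tensor(batch.reward, dtype=torch.float)
        return states, actions, next_states, rewards

The answer above uses tensor.data.numpy().tolist(), which on PyTorch 0.4 has the same effect; detach() is simply the more explicit way to cut a tensor loose from its graph before storing it.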

0 votes

Aren't you calling tensor.data.numpy().tolist() on the data before it goes into the replay_buffer? Or is it called inside self.replay_buffer.push(...)?
