How to find the variable that causes "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation"

Problem description

I am trying to build a very simple network with a GRUCell layer to perform the following task: a cue is presented at one of two locations. After T time steps, the agent must learn to take a particular action at the opposite location.

When I try to compute the gradients in the backward pass, I get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.

Part of the problem is that I don't fully understand which part of my code is performing an in-place operation.

I have read other posts on Stack Overflow and the PyTorch forums, all of which suggest using the .clone() operation. I have sprinkled it throughout my code wherever I thought it might make a difference, but it has not helped.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.gru    = nn.GRUCell(2,50) # GRU layer taking 2 inputs (L or R), has 50 units
        self.actor  = nn.Linear(50,2)  # Linear Actor layer with 2 outputs, takes GRU as input
        self.critic = nn.Linear(50,1)  # Linear Critic layer with 1 output, takes GRU as input

    def forward(self, s, h):
        h  = self.gru(s,h)  # give the input and previous hidden state to the GRU layer
        c  = self.critic(h) # estimate the value of the current state
        pi = F.softmax(self.actor(h),dim=1) # calculate the policy 
        return (h,c,pi)

    def backward_rollout(self, gamma, R_t, c_t, t):
        R_t[0,t] = gamma*R_t[0,t+1].clone()
        # calculate the reward prediction error
        Delta_t[0,t] = c_t[0,t].clone() - R_t[0,t].clone()

        #calculate the loss for the critic 
        crit = c_t[0,t].clone()
        ret  = R_t[0,t].clone()
        Value_l[0,t] = F.smooth_l1_loss(crit,ret)


###################################
# Run a trial

# parameters
N      = 1    # number of trials to run
T      = 10   # number of time-steps in a trial
gamma  = 0.98 # temporal discount factor

# network and optimizer (instantiation not shown in the original post; Adam is assumed here)
net = Net()
opt = torch.optim.Adam(net.parameters())

# for each trial
for n in range(N):   
    sample  = np.random.choice([0,1],1)[0] # pick the sample input for this trial
    s_t     = torch.zeros((1,2,T))   # state at each time step

    h_0     = torch.zeros((1,50))    # initial hidden state
    h_t     = torch.zeros((1,50,T))  # hidden state at each time step

    c_t     = torch.zeros((1,T))    # critic at each time step
    pi_t    = torch.zeros((1,2,T))  # policy at each time step

    R_t     = torch.zeros((1,T))  # return at each time step
    Delta_t = torch.zeros((1,T))  # difference between critic and true return at each step
    Value_l = torch.zeros((1,T))  # value loss

    # set the input (state) vector/tensor
    s_t[0,sample,0] = 1.0 # set first time-step stimulus
    s_t[0,0,-1]     = 1.0 # set last time-step stimulus
    s_t[0,1,-1]     = 1.0 # set last time-step stimulus

    # step through the trial
    for t in range(T):  
        # run a forward step
        state = s_t[:,:,t].clone()
        if t == 0:
            (hidden_state, critic, policy) = net(state, h_0)

        else:
            (hidden_state, critic, policy) = net(state, h_t[:,:,t-1])

        h_t[:,:,t]  = hidden_state.clone()
        c_t[:,t]    = critic.clone()
        pi_t[:,:,t] = policy.clone()

    # select an action using the policy
    action = np.random.choice([0,1], p=policy[0,:].detach().numpy()) # sample a scalar action from the final policy
    #action = int(np.random.uniform() < pi[0,1])

    # compare the action to the sample
    if action == sample:
        r = 0
        print("WRONG!")
    else:
        r = 1
        print("RIGHT!")

    #h_t_old = h_t
    #s_t_old = s_t

    # step backwards through the trial to calculate gradients
    R_t[0,-1]     = r
    Delta_t[0,-1] = c_t[0,-1].clone() - r
    Value_l[0,-1] = F.smooth_l1_loss(c_t[0,-1],R_t[0,-1]).clone()

    for t in np.arange(T-2,-1,-1): #backwards rollout 
        net.backward_rollout(gamma, R_t, c_t, t)

    Vl = Value_l.clone().sum() # calculate total loss

    Vl.backward() #calculate the derivatives 
    opt.step() #update the weights
    opt.zero_grad() #zero gradients before next trial
1 Answer

You can try anomaly detection to pin down the exact offending in-place operation: https://github.com/pytorch/pytorch/issues/15803
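
For example, here is a minimal sketch (not part of the original answer) of how anomaly detection could be switched on for the script in the question; torch.autograd.set_detect_anomaly is the standard switch, and everything else is assumed to be exactly as posted:

import torch

# Turn on anomaly detection before running the trial. With detection enabled,
# the forward pass records a traceback for every autograd operation, and the
# backward pass then reports which forward operation produced the tensor that
# was later modified in place.
torch.autograd.set_detect_anomaly(True)

# ... run the forward rollout and Vl.backward() exactly as in the question ...
# The RuntimeError will now carry a second traceback pointing at the
# offending indexed assignment.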

Value_l[0,-1] = and the similar indexed assignments are in-place operations. You could sidestep the check by writing Value_l.data[0,-1] = instead, but that bypasses the computation graph and is probably a bad idea. A related discussion is here: https://discuss.pytorch.org/t/how-to-get-around-in-place-operation-error-if-index-leaf-variable-for-gradient-update/14554
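
A common way around this (a sketch only, not part of the original answer) is to avoid writing results into preallocated tensors altogether: collect the per-timestep outputs in ordinary Python lists, combine them with torch.cat, and compute a single loss over all time steps. The variable names below mirror the question, and net, s_t, h_0, T, gamma, r, and opt are assumed to be defined as in the post:

# Forward rollout without any in-place writes into tensors that autograd tracks.
hidden = h_0
critics, policies = [], []
for t in range(T):
    hidden, critic, policy = net(s_t[:, :, t], hidden)
    critics.append(critic)
    policies.append(policy)

c_t = torch.cat(critics, dim=1)   # shape (1, T); stays on the autograd graph

# Discounted returns built as plain Python numbers (no gradients needed here).
returns = [0.0] * T
returns[-1] = float(r)
for t in range(T - 2, -1, -1):
    returns[t] = gamma * returns[t + 1]
R_t = torch.tensor(returns).unsqueeze(0)   # shape (1, T), constant target

# One critic loss over the whole trial, then backprop and update as before.
value_loss = F.smooth_l1_loss(c_t, R_t)
value_loss.backward()
opt.step()
opt.zero_grad()

The action can then be sampled from policies[-1] just as in the original code, and since nothing in the rollout is modified in place, the backward pass no longer trips the version-counter check.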
