Output 0 of DequantizeAndLinearBackward is a view and is being modified in-place. The view was created inside a custom autograd Function


I am trying to fine-tune GPT-J, but I get the error below. I think it is related to the activation function and an in-place operation, but I don't know what code change would fix it.

Is there a parameter in the activation function that needs to be disabled? If so, which one?

Thanks in advance for your help!

     13         output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
     14         if self.adapter:
---> 15             output += self.adapter(input)
     16         return output
     17
RuntimeError: Output 0 of DequantizeAndLinearBackward is a view and is being modified in-place. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
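To see why PyTorch raises this, here is a minimal, self-contained sketch (a toy Function written for illustration, not part of the model code) that trips the same check: the custom Function returns a view of its input, and the caller then modifies that view in place.

import torch

class ReturnsView(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view(-1)  # output is a view created inside the Function

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.view(1, -1)

x = torch.randn(1, 4, requires_grad=True)
out = ReturnsView.apply(x)
out += 1.0  # in-place edit of the view -> the same RuntimeError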
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.cuda.amp import custom_fwd, custom_bwd
from bitsandbytes.functional import dequantize_blockwise


class FrozenBNBLinear(nn.Module):
    # (constructor and quantized-weight attributes omitted in the question)
    def forward(self, input):
        output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
        if self.adapter:
            output += self.adapter(input)  # in-place add on the custom Function's output -> raises the error above
        return output
 
    @classmethod
    def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear":
        weights_int8, state = quantize_blockise_lowmemory(linear.weight)  # chunked quantization helper defined elsewhere (not shown)
        return cls(weights_int8, *state, linear.bias)
 
    def __repr__(self):
        return f"{self.__class__.__name__}({self.in_features}, {self.out_features})"
 
 
class DequantizeAndLinear(torch.autograd.Function): 
    @staticmethod
    @custom_fwd
    def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
        weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
        ctx.save_for_backward(input, weights_quantized, absmax, code)
        ctx._has_bias = bias is not None
        return F.linear(input, weights_deq, bias)
 
    @staticmethod
    @custom_bwd
    def backward(ctx, grad_output: torch.Tensor):
        assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
        input, weights_quantized, absmax, code = ctx.saved_tensors
        # grad_output: [*batch, out_features]
        weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
        grad_input = grad_output @ weights_deq
        grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
        return grad_input, None, None, None, grad_bias
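For reference, a hypothetical usage sketch (convert_linears is an illustrative name, not from the original code) showing how from_linear could swap every nn.Linear in a model for its frozen 8-bit counterpart:

import torch.nn as nn

def convert_linears(module: nn.Module) -> None:
    # Recursively replace each nn.Linear with a FrozenBNBLinear built from it.
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, FrozenBNBLinear.from_linear(child))
        else:
            convert_linears(child)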
 
Tags: python, torch, autograd, activation-function, llm
1 Answer

You just need to add .clone() to the value returned by the custom Function's forward. Here it is:

        return F.linear(input, weights_deq, bias).clone()
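Applied to the code in the question, the patched forward of DequantizeAndLinear is identical except for the final .clone() (custom_fwd and dequantize_blockwise as in the imports above):

    @staticmethod
    @custom_fwd
    def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
        weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
        ctx.save_for_backward(input, weights_quantized, absmax, code)
        ctx._has_bias = bias is not None
        # .clone() breaks the view relationship between the returned tensor and
        # the Function's output, so the later in-place `output += self.adapter(input)`
        # no longer conflicts with the custom backward.
        return F.linear(input, weights_deq, bias).clone()

An equivalent fix is to avoid the in-place add altogether: in FrozenBNBLinear.forward, replace output += self.adapter(input) with output = output + self.adapter(input).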