Gradient-not-set error in PyTorch during training

Question

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 

I am hitting this error in the training loop below. The gradients should be set up by the Sequential model itself, but the error says there is none.

"""Training"""
Epochs = 100


for epoch in range(Epochs):
    model.train()

    train_logits = model(X_train)
    train_preds_probs = torch.softmax(train_logits,dim=1).argmax(dim=1).type(torch.float32)
    loss = loss_fn(train_preds_probs,y_train)
    train_accu = accuracy(y_train,train_preds_probs)
    print(train_preds_probs)
    optimiser.zero_grad()

    loss.backward()

    optimiser.step()

    #training
    model.eval()
    with torch.inference_mode():
        test_logits = model(X_test)
        test_preds = torch.softmax(test_logits.type(torch.float32),dim=1).argmax(dim=1)
        test_loss = loss_fn(test_preds,y_train)
        test_acc = accuracy(y_test,test_preds)

    
    if epoch%10 == 0:
        print(f'Epoch:{epoch} | Train loss: {loss} |Taining acc:{train_accu} | Test Loss: {test_loss} | Test accu: {test_acc}')





I tried searching the web but could not find a solution.

Any help would be appreciated!

python performance deep-learning pytorch ml
1 Answer

Sometimes this kind of error is raised when a part of the code is wrapped in a with torch.no_grad(): block.

I would suggest going through the various functions in your code to check for that. Maybe you have already done so, but it is a good place to start!
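For illustration, here is a minimal sketch of that failure mode. The model, loss_fn, and data shapes are made up for the example and are not the asker's actual setup; the point is that tensors produced inside torch.no_grad() have requires_grad=False and no grad_fn, so calling backward() on a loss built from them raises exactly this RuntimeError.

import torch
import torch.nn as nn

model = nn.Linear(4, 2)        # hypothetical stand-in for the real model
loss_fn = nn.CrossEntropyLoss()
X = torch.randn(8, 4)          # made-up inputs
y = torch.randint(0, 2, (8,))  # made-up integer class labels

with torch.no_grad():          # forward pass detached from autograd
    logits = model(X)          # logits.grad_fn is None here

loss = loss_fn(logits, y)
loss.backward()                # RuntimeError: element 0 of tensors does not require grad ...

Moving the forward pass and the loss computation outside the no_grad() block restores the autograd graph, and backward() then succeeds.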
