Correct way to do Lagrangian dual optimization in PyTorch?


I am trying to optimize a simple dual linear program with PyTorch. Here is my code, but I keep getting an error on the backward pass after the first loop iteration:

import torch

# c, A, b are the LP data and n, m the corresponding dimensions,
# assumed to be defined earlier (e.g. as NumPy arrays / ints)
c_t = torch.tensor(c).float()
A_t = torch.tensor(A).float()
b_t = torch.tensor(b).float()
x_t = torch.rand(n, 1, requires_grad=True)

def max_grad(grad):
    return -grad

_lagrange_multiplier = torch.rand(m, requires_grad=True)
_lagrange_multiplier.register_hook(max_grad) # because we maximize wrt lambda
lagrange_multiplier = torch.nn.functional.softplus(_lagrange_multiplier)

opt_weights = torch.optim.Adam([x_t], lr=0.1)
opt_lagrange = torch.optim.Adam([_lagrange_multiplier], lr=0.1)

for i in range(10):
    print(i)
    opt_weights.zero_grad()
    opt_lagrange.zero_grad()

    objective = c_t.T @ x_t
    constraint = (A_t @ x_t).squeeze() - b_t
    lagrangian = objective + lagrange_multiplier.T @ constraint

    lagrangian.backward()

    opt_weights.step()
    opt_lagrange.step()

However, I keep getting the following error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

I thought that since I zero the gradients on every iteration, I would be able to call backward() again?

Thanks for your help.

python pytorch linear-programming
1 Answer

This happens because I was not recomputing the variable lagrange_multiplier after applying the gradient update to _lagrange_multiplier. The corrected for loop is:

for i in range(10):
    print(i)
    opt_weights.zero_grad()
    opt_lagrange.zero_grad()

    objective = c_t.T @ x_t
    constraint = (A_t @ x_t).squeeze() - b_t
    # recompute the softplus transform inside the loop, so a fresh graph
    # is built from the updated _lagrange_multiplier on every iteration
    lagrange_multiplier = torch.nn.functional.softplus(_lagrange_multiplier)
    lagrangian = objective + lagrange_multiplier.T @ constraint

    lagrangian.backward()

    opt_weights.step()
    opt_lagrange.step()
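
Note that zero_grad() only clears the accumulated .grad buffers; it does not rebuild the autograd graph. In the original code the softplus node was created once, outside the loop, and the intermediate buffers of that graph are freed by the first call to backward(), so reusing the stale lagrange_multiplier in the second iteration triggers the RuntimeError. Recomputing it inside the loop builds a fresh graph from the current leaf tensor each time.

For reference, here is a minimal self-contained version of the corrected approach; the dimensions and the values of c, A and b below are placeholders I made up, not data from the question:

import numpy as np
import torch

# made-up LP data (placeholder): minimize c^T x subject to A x <= b
m, n = 3, 2
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 0.5],
              [0.0, 1.0]])
b = np.array([4.0, 3.0, 2.0])

c_t = torch.tensor(c).float().reshape(n, 1)
A_t = torch.tensor(A).float()
b_t = torch.tensor(b).float()
x_t = torch.rand(n, 1, requires_grad=True)

def max_grad(grad):
    # flip the gradient so the optimizer ascends on the multipliers
    return -grad

_lagrange_multiplier = torch.rand(m, requires_grad=True)
_lagrange_multiplier.register_hook(max_grad)

opt_weights = torch.optim.Adam([x_t], lr=0.1)
opt_lagrange = torch.optim.Adam([_lagrange_multiplier], lr=0.1)

for i in range(10):
    opt_weights.zero_grad()
    opt_lagrange.zero_grad()

    objective = c_t.T @ x_t
    constraint = (A_t @ x_t).squeeze() - b_t
    # softplus keeps the multipliers non-negative; recomputed each iteration
    lagrange_multiplier = torch.nn.functional.softplus(_lagrange_multiplier)
    lagrangian = objective + lagrange_multiplier.T @ constraint

    lagrangian.backward()

    opt_weights.step()
    opt_lagrange.step()

    print(i, lagrangian.item())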