Warm-up with the Adam optimizer in PyTorch

Problem description (0 votes, 6 answers)

In the paper Attention Is All You Need, Section 5.3, the authors suggest increasing the learning rate linearly at first and then decreasing it proportionally to the inverse square root of the step number.

How can we implement this in PyTorch with the Adam optimizer, preferably without additional packages?

python machine-learning pytorch
6 Answers
15 votes

PyTorch provides learning-rate schedulers for implementing various ways of adjusting the learning rate during training. Some simple LR schedulers are already implemented and can be found here: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate

In your particular case you can, just like for the other LR schedulers, subclass `_LRScheduler` to implement a variable schedule based on the number of epochs. For a bare-bones approach you only need to implement the `__init__()` and `get_lr()` methods.

Just note that many schedulers expect you to call `.step()` once per epoch. But you can also update it more frequently, or even pass custom arguments, as in the cosine-annealing LR scheduler: https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CosineAnnealingLR
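A minimal sketch of such a subclass, implementing the schedule from Section 5.3 of the paper (the class name `NoamLR` and its constructor arguments are our own choice, not a PyTorch API):

```python
import torch
from torch.optim.lr_scheduler import _LRScheduler

class NoamLR(_LRScheduler):
    """Noam schedule: lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)."""
    def __init__(self, optimizer, d_model, warmup_steps, last_epoch=-1):
        self.d_model = d_model
        self.warmup_steps = warmup_steps
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        # last_epoch counts scheduler.step() calls; guard against 0 ** -0.5
        step = max(1, self.last_epoch)
        scale = self.d_model ** (-0.5) * min(step ** (-0.5),
                                             step * self.warmup_steps ** (-1.5))
        # the schedule ignores the optimizer's base lr and sets it directly
        return [scale for _ in self.base_lrs]
```

Since the schedule is defined over steps rather than epochs, call `scheduler.step()` after every `optimizer.step()`, i.e. once per batch.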


8 votes

As suggested in the last comment, we can use the class introduced at https://nlp.seas.harvard.edu/2018/04/03/attention.html#optimizer. However, that answer raises an error unless we also define a function to update the state_dict.

Here is the full scheduler:

class NoamOpt:
    "Optim wrapper that implements rate."
    def __init__(self, model_size, warmup, optimizer):
        self.optimizer = optimizer
        self._step = 0
        self.warmup = warmup
        self.model_size = model_size
        self._rate = 0
    
    def state_dict(self):
        """Returns the state of the warmup scheduler as a :class:`dict`.
        It contains an entry for every variable in self.__dict__ which
        is not the optimizer.
        """
        return {key: value for key, value in self.__dict__.items() if key != 'optimizer'}
    
    def load_state_dict(self, state_dict):
        """Loads the warmup scheduler's state.
        Arguments:
            state_dict (dict): warmup scheduler state. Should be an object returned
                from a call to :meth:`state_dict`.
        """
        self.__dict__.update(state_dict) 
        
    def step(self):
        "Update parameters and rate"
        self._step += 1
        rate = self.rate()
        for p in self.optimizer.param_groups:
            p['lr'] = rate
        self._rate = rate
        self.optimizer.step()
        
    def rate(self, step = None):
        "Implement `lrate` above"
        if step is None:
            step = self._step
        return (self.model_size ** (-0.5) *
            min(step ** (-0.5), step * self.warmup ** (-1.5))) 

Later, use it in the training loop:

optimizer = NoamOpt(input_opts['d_model'], 500,
            torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))

...

optimizer.step()

8 votes

The NoamOpt above provides a path to implementing a warm-up learning rate, as in https://nlp.seas.harvard.edu/2018/04/03/attention.html#optimizer. However, it is a bit dated and not very convenient. A smarter way to achieve this is to use the lambda learning-rate scheduler that PyTorch supports directly.

That is, you first define a warm-up function to adjust the learning rate automatically:

def warmup(current_step: int):
    if current_step < args.warmup_steps:  # current_step / warmup_steps * base_lr
        return float(current_step / args.warmup_steps)
    else:                                 # (num_training_steps - current_step) / (num_training_steps - warmup_steps) * base_lr
        return max(0.0, float(args.training_steps - current_step) / float(max(1, args.training_steps - args.warmup_steps)))

Then build the learning-rate scheduler and use it during training:

lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup)
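Note that this lambda gives a linear warm-up followed by a linear decay to zero, not the paper's inverse-square-root decay. Its behaviour is easy to check in isolation; here the `args` fields are replaced by plain variables with assumed values:

```python
warmup_steps = 100     # assumed values, just for illustration
training_steps = 1000

def warmup(current_step: int):
    # linear ramp from 0 to 1 over warmup_steps, then linear decay to 0
    if current_step < warmup_steps:
        return float(current_step / warmup_steps)
    return max(0.0, float(training_steps - current_step) /
               float(max(1, training_steps - warmup_steps)))

print(warmup(50), warmup(100), warmup(550), warmup(1000))  # 0.5 1.0 0.5 0.0
```

`LambdaLR` multiplies the optimizer's base learning rate by this factor on every `scheduler.step()` call.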

4 votes
class NoamOpt:
    "Optim wrapper that implements rate."
    def __init__(self, model_size, factor, warmup, optimizer):
        self.optimizer = optimizer
        self._step = 0
        self.warmup = warmup
        self.factor = factor
        self.model_size = model_size
        self._rate = 0

    def step(self):
        "Update parameters and rate"
        self._step += 1
        rate = self.rate()
        for p in self.optimizer.param_groups:
            p['lr'] = rate
        self._rate = rate
        self.optimizer.step()

    def rate(self, step=None):
        "Implement `lrate` above"
        if step is None:
            step = self._step
        return self.factor * \
            (self.model_size ** (-0.5) *
             min(step ** (-0.5), step * self.warmup ** (-1.5)))

def get_std_opt(model):
    return NoamOpt(model.src_embed[0].d_model, 2, 4000,
                   torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))

From: https://nlp.seas.harvard.edu/2018/04/03/attention.html#optimizer


3 votes

This builds on Fang Wu's answer, but the warm-up is logarithmic, and it lets you use a built-in scheduler as the main one. CosineAnnealingLR can be replaced with any scheduler you want.

from torch.optim.lr_scheduler import CosineAnnealingLR, LambdaLR, SequentialLR

train_scheduler = CosineAnnealingLR(optimizer, num_epochs)

def warmup(current_step: int):
    return 1 / (10 ** (float(number_warmup_epochs - current_step)))

warmup_scheduler = LambdaLR(optimizer, lr_lambda=warmup)

scheduler = SequentialLR(optimizer, [warmup_scheduler, train_scheduler],
                         milestones=[number_warmup_epochs])

0 votes

Based on the figure in the paper, I can write the formula as

self.model_size ** (-0.5) * min(self.current_step ** (-0.5), self.current_step * self.warmup_steps ** (-1.5))
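As a plain function, this formula rises linearly up to `warmup_steps`, peaks there, then decays as the inverse square root of the step; the defaults below are just the paper's base configuration (d_model = 512, 4000 warm-up steps):

```python
def noam_rate(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """Noam learning rate: d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    return d_model ** (-0.5) * min(step ** (-0.5), step * warmup_steps ** (-1.5))

# rises during warm-up, peaks at warmup_steps, then decays as step^-0.5
assert noam_rate(2000) < noam_rate(4000) > noam_rate(8000)
```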

class AdamWarmup:

    def __init__(self, model_size, warmup_steps, optimizer):
        self.model_size = model_size
        self.warmup_steps = warmup_steps
        self.optimizer = optimizer
        self.current_step = 0
        self.lr = 0

    def get_lr(self):
        return self.model_size ** (-0.5) * min(self.current_step ** (-0.5), self.current_step * self.warmup_steps ** (-1.5))

    def step(self):
        # Increment the number of steps each time we call the step function
        self.current_step += 1
        lr = self.get_lr()
        for param_group in self.optimizer.param_groups:
            param_group['lr'] = lr
        # update the learning rate
        self.lr = lr
        self.optimizer.step()
adam_optimizer = torch.optim.Adam(transformer.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9)
modified_optimizer = AdamWarmup(model_size=d_model, warmup_steps=4000, optimizer=adam_optimizer)


for i, (image, labels) in enumerate(train_loader):
    ...
    modified_optimizer.optimizer.zero_grad()
    loss.backward()
    modified_optimizer.step()
    ...