mindformers.core.CosineAnnealingLR

class mindformers.core.CosineAnnealingLR(base_lr: float, t_max: int, eta_min: float = 0., **kwargs)

This schedule was proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this class implements only the cosine annealing part of SGDR, not the restarts.

Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr and \(T_{cur}\) is the number of epochs since the last restart in SGDR:

\[\begin{aligned}
\eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right),
& T_{cur} \neq (2k+1)T_{max}; \\
\eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right),
& T_{cur} = (2k+1)T_{max}.
\end{aligned}\]

When last_epoch=-1, the initial learning rate is set to lr. Note that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:

\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\]
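
For reference, this closed form can be sketched in a few lines of plain Python, independent of MindSpore (a minimal illustration only, with global_step playing the role of \(T_{cur}\)):

>>> import math
>>>
>>> def cosine_annealing_lr(global_step, base_lr, t_max, eta_min=0.):
...     # eta_t = eta_min + 1/2 * (base_lr - eta_min) * (1 + cos(pi * global_step / t_max))
...     return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * global_step / t_max))
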
Parameters
  • base_lr (float) – Initial value of the learning rate.

  • t_max (int) – Maximum number of iterations.

  • eta_min (float, optional) – Minimum learning rate. Default: 0.

Inputs:
  • global_step (int) - The current global training step.

Outputs:

Learning rate at the given global_step.

Examples

>>> import mindspore as ms
>>> from mindformers.core import CosineAnnealingLR
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> base_lr = 0.005
>>> t_max = 10
>>> eta_min = 0.0000001
>>>
>>> cosine_annealing = CosineAnnealingLR(base_lr=base_lr, t_max=t_max, eta_min=eta_min)
>>> print(cosine_annealing(1))
0.0048776437
>>> print(cosine_annealing(15))
0.0025000498
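
These values follow the closed-form expression above: at global_step=1, \(\eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\frac{\pi}{10}\right) \approx 0.0048776\), and at global_step=15, \(\cos\frac{15\pi}{10} = 0\), so the learning rate is \(\frac{\eta_{max} + \eta_{min}}{2} \approx 0.0025\), in agreement with the printed outputs.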