mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR

class mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0.0, last_epoch=-1)[source]

Set the learning rate of each parameter group using a cosine annealing schedule, where η_max is set to the initial lr, η_min is the minimum learning rate, η_t is the learning rate at the current step, T_max is the number of iterations of the cosine function, and T_cur is the number of epochs since the last restart in SGDR:

$$
\begin{aligned}
\eta_t &= \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} &\neq (2k+1)T_{max}; \\
\eta_{t+1} &= \eta_t + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} &= (2k+1)T_{max}.
\end{aligned}
$$

For more details, please refer to SGDR: Stochastic Gradient Descent with Warm Restarts.
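The non-restart branch of this formula can be checked directly in plain Python. The snippet below is only an illustrative sketch, not part of the MindSpore API; it assumes the settings used in the Examples section further down (initial lr 0.1, eta_min 0.0, T_max 2).

import math

# Illustrative helper (not a MindSpore API): closed-form cosine annealing value.
def annealed_lr(t_cur, t_max=2, eta_max=0.1, eta_min=0.0):
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(t_cur / t_max * math.pi))

# Steps 1..6 reproduce the learning rates printed in the Examples section:
# [0.05, 0.0, 0.05, 0.1, 0.05, 0.0]
print([round(annealed_lr(t), 4) for t in range(1, 7)])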

Warning

This is an experimental lr scheduler module that is subject to change. It must be used with the optimizers in Experimental Optimizer.

Parameters
  • optimizer (mindspore.experimental.optim.Optimizer) – Wrapped optimizer.

  • T_max (int) – Maximum number of iterations.

  • eta_min (float, optional) – Minimum learning rate. Default: 0.0. (A sketch with a nonzero value follows this parameter list.)

  • last_epoch (int, optional) – The index of the last epoch. Default: -1.
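
A nonzero eta_min only raises the floor of the cosine curve: the learning rate still starts from the initial lr and anneals downward, but bottoms out at eta_min instead of 0. A minimal sketch, assuming the same SGD setup as in the Examples below and an illustrative eta_min of 0.01:

from mindspore import nn
from mindspore.experimental import optim

net = nn.Dense(3, 2)
optimizer = optim.SGD(net.trainable_params(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=2, eta_min=0.01)

scheduler.step()
print(scheduler.get_last_lr())  # about 0.055: halfway between 0.1 and eta_min
scheduler.step()
print(scheduler.get_last_lr())  # about 0.01: the floor of the schedule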

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore.experimental import optim
>>> from mindspore import nn
>>> net = nn.Dense(3, 2)
>>> optimizer = optim.SGD(net.trainable_params(), lr=0.1, momentum=0.9)
>>> scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=2)
>>>
>>> # With T_max=2 the lr repeats every 2*T_max = 4 steps: 0.05, 0, 0.05, 0.1, ...
>>> for i in range(6):
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.05)]
[Tensor(shape=[], dtype=Float32, value= 0)]
[Tensor(shape=[], dtype=Float32, value= 0.05)]
[Tensor(shape=[], dtype=Float32, value= 0.1)]
[Tensor(shape=[], dtype=Float32, value= 0.05)]
[Tensor(shape=[], dtype=Float32, value= 0)]