mindspore.experimental.optim.lr_scheduler.LinearLR

class mindspore.experimental.optim.lr_scheduler.LinearLR(optimizer, start_factor=1.0 / 3, end_factor=1.0, total_iters=5, last_epoch=-1)

Linearly changes the learning-rate multiplicative factor and scales the learning rate of each parameter group by it, until last_epoch reaches total_iters. Note that this decay can happen simultaneously with other external changes to the learning rate.
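The rule above can be sketched in a few lines of plain Python (an illustration of the described behavior, not the MindSpore implementation; `linear_lr_factor` is a hypothetical helper): the factor moves linearly from start_factor to end_factor over total_iters steps, then stays constant.

```python
def linear_lr_factor(epoch, start_factor=1.0 / 3, end_factor=1.0, total_iters=5):
    # Linear interpolation from start_factor to end_factor,
    # clamped once epoch reaches total_iters.
    progress = min(epoch, total_iters) / total_iters
    return start_factor + (end_factor - start_factor) * progress

# Reproduce the schedule from the example in this page:
# base lr 0.05, start_factor=0.5, total_iters=4.
base_lr = 0.05
lrs = [base_lr * linear_lr_factor(e, start_factor=0.5, total_iters=4)
       for e in range(6)]
# lrs == [0.025, 0.03125, 0.0375, 0.04375, 0.05, 0.05]
```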

Warning

This is an experimental dynamic learning-rate interface and must be used together with the interfaces under mindspore.experimental.optim.

Parameters:
  • optimizer (mindspore.experimental.optim.Optimizer) - The optimizer instance.

  • start_factor (float, optional) - The initial multiplicative factor, which then changes linearly toward end_factor. Default: 1.0 / 3.

  • end_factor (float, optional) - The multiplicative factor at the end of the linear change. Default: 1.0.

  • total_iters (int, optional) - The number of iterations over which the linear change takes place. Default: 5.

  • last_epoch (int, optional) - The number of times the scheduler's step() method has been called. Default: -1.

Raises:
  • ValueError - start_factor is not in the range (0, 1].

  • ValueError - end_factor is not in the range [0, 1].

Supported Platforms:

Ascend GPU CPU

Examples:

>>> import mindspore
>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/r2.2/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = optim.Adam(net.trainable_params(), lr=0.05)
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025    if epoch == 0
>>> # lr = 0.03125  if epoch == 1
>>> # lr = 0.0375   if epoch == 2
>>> # lr = 0.04375  if epoch == 3
>>> # lr = 0.05     if epoch >= 4
>>> scheduler = optim.lr_scheduler.LinearLR(optimizer, start_factor=0.5, total_iters=4)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
>>> for epoch in range(5):
...     # Create the dataset taking MNIST as an example. Refer to
...     # https://gitee.com/mindspore/docs/blob/r2.2/docs/mindspore/code/mnist.py
...     for data, label in create_dataset():
...         train_step(data, label)
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()