mindspore.nn.LinearLR
- class mindspore.nn.LinearLR(optimizer, start_factor=1.0 / 3, end_factor=1.0, total_iters=5, last_epoch=-1, verbose=False)[source]
Linearly adjusts the multiplicative factor used to scale the learning rate of each parameter group, until last_epoch reaches the predefined threshold total_iters. Note that the decay applied by LinearLR may happen simultaneously with other changes to the learning rate from outside this scheduler.
Warning
This is an experimental dynamic learning rate interface and must be used together with the interfaces under Experimental Optimizer.
- Parameters:
  - optimizer (mindspore.nn.optim_ex.Optimizer) - Optimizer instance.
  - start_factor (float, optional) - Initial multiplicative factor, which is changed linearly toward end_factor over the schedule. Default: 1.0 / 3.
  - end_factor (float, optional) - Multiplicative factor at the end of the linear change. Default: 1.0.
  - total_iters (int, optional) - Number of iterations over which the linear change takes place. Default: 5.
  - last_epoch (int, optional) - Index of the last epoch/step. Default: -1.
  - verbose (bool, optional) - Whether to print the learning rate at each update. Default: False.
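The resulting schedule can be sketched in plain Python. The following is an illustrative reimplementation of the factor computation, not the library code:

```python
def linear_lr_factor(epoch, start_factor=1.0 / 3, end_factor=1.0, total_iters=5):
    # Factor applied to the base lr: moves linearly from start_factor to
    # end_factor over total_iters epochs/steps, then stays at end_factor.
    progress = min(epoch, total_iters) / total_iters
    return start_factor + (end_factor - start_factor) * progress

# Reproduces the lr values listed in the example below:
# base lr = 0.05, start_factor=0.5, total_iters=4.
base_lr = 0.05
lrs = [round(base_lr * linear_lr_factor(e, start_factor=0.5, total_iters=4), 6)
       for e in range(5)]
print(lrs)  # [0.025, 0.03125, 0.0375, 0.04375, 0.05]
```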
- Raises:
  - ValueError - start_factor is not in the range (0, 1].
  - ValueError - end_factor is not in the range [0, 1].
- Supported Platforms:
Ascend
GPU
CPU
Examples:
>>> import mindspore
>>> from mindspore.nn import LinearLR
>>> from mindspore import nn
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/r2.1/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = nn.optim_ex.Adam(net.trainable_params(), lr=0.05)
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025    if epoch == 0
>>> # lr = 0.03125  if epoch == 1
>>> # lr = 0.0375   if epoch == 2
>>> # lr = 0.04375  if epoch == 3
>>> # lr = 0.05     if epoch >= 4
>>> scheduler = LinearLR(optimizer, start_factor=0.5, total_iters=4)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
>>> for epoch in range(5):
...     # Create the dataset taking MNIST as an example. Refer to
...     # https://gitee.com/mindspore/docs/blob/r2.1/docs/mindspore/code/mnist.py
...     for data, label in create_dataset():
...         train_step(data, label)
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()