mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau
- class mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
Reduce the learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
Warning
This is an experimental lr scheduler module that is subject to change. It must be used with the optimizers in Experimental Optimizer.
- Parameters
- optimizer (mindspore.experimental.optim.Optimizer) – Wrapped optimizer.
- mode (str, optional) – One of 'min', 'max'. In 'min' mode, the lr will be reduced when the quantity monitored has stopped decreasing; in 'max' mode, it will be reduced when the quantity monitored has stopped increasing. Default: 'min'.
- factor (float, optional) – Factor by which the learning rate will be reduced. Default: 0.1.
- patience (int, optional) – Number of epochs with no improvement after which the learning rate will be reduced. For example, if patience = 2, the first 2 epochs with no improvement are ignored, and the lr is only decreased after the 3rd epoch if the loss still has not improved by then. Default: 10.
- threshold (float, optional) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
- threshold_mode (str, optional) – One of 'rel', 'abs'. dynamic_threshold is the benchmark used to decide whether the current metric is an improvement. In 'rel' mode, dynamic_threshold = best * (1 + threshold) in 'max' mode or best * (1 - threshold) in 'min' mode; in 'abs' mode, dynamic_threshold = best + threshold in 'max' mode or best - threshold in 'min' mode. Default: 'rel'. See the sketch after this parameter list.
- cooldown (int, optional) – Number of epochs to wait before resuming normal operation after the lr has been reduced. Default: 0.
- min_lr (Union(float, list), optional) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or of each group respectively. Default: 0.
- eps (float, optional) – Minimal decay applied to the lr. If the difference between the new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
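The threshold / threshold_mode test above can be summarized as a small stand-alone function. The sketch below is illustrative only: it assumes the scheduler uses the standard ReduceLROnPlateau improvement test described by the parameters, and is_improvement is a hypothetical helper, not part of the MindSpore API.

def is_improvement(current, best, mode='min', threshold=1e-4, threshold_mode='rel'):
    """Return True if `current` beats `best` by more than the threshold margin."""
    if threshold_mode == 'rel':
        # Relative margin around the best value seen so far.
        if mode == 'min':
            return current < best * (1 - threshold)
        return current > best * (1 + threshold)
    # 'abs' mode: absolute margin around the best value.
    if mode == 'min':
        return current < best - threshold
    return current > best + threshold

# With the defaults (mode='min', threshold=1e-4, threshold_mode='rel'),
# dynamic_threshold = 1.0 * (1 - 1e-4) = 0.9999:
print(is_improvement(0.99995, 1.0))  # False: 0.99995 is not below 0.9999
print(is_improvement(0.99980, 1.0))  # True:  0.99980 is below 0.9999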
- Raises
ValueError – factor is greater than or equal to 1.0.
TypeError – optimizer is not an Optimizer.
ValueError – When min_lr is a list or tuple, its length is not equal to the number of param groups.
ValueError – mode is neither 'min' nor 'max'.
ValueError – threshold_mode is neither 'rel' nor 'abs'.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> from mindspore.experimental import optim
>>> from mindspore import nn
>>> net = nn.Dense(3, 2)
>>> optimizer = optim.Adam(net.trainable_params(), 0.1)
>>> scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=0)
>>> metrics = [1, 1.5, 1.8, 0.4, 0.5]
>>> for i in range(5):
...     scheduler.step(metrics[i])
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.1)]
[Tensor(shape=[], dtype=Float32, value= 0.01)]
[Tensor(shape=[], dtype=Float32, value= 0.001)]
[Tensor(shape=[], dtype=Float32, value= 0.001)]
[Tensor(shape=[], dtype=Float32, value= 0.0001)]
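A minimal sketch of how the scheduler is typically driven by a per-epoch validation metric inside a training loop. The network, the factor/patience/min_lr values, and the validate helper below are illustrative assumptions, not part of the official example; in practice, validate would evaluate the model on held-out data.

from mindspore import nn
from mindspore.experimental import optim

net = nn.Dense(3, 2)
optimizer = optim.Adam(net.trainable_params(), 0.1)
# Halve the lr after 2 epochs without improvement, but never go below 1e-4
# (factor, patience and min_lr here are illustrative choices).
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=2, min_lr=1e-4)

def validate(epoch):
    # Placeholder: compute the validation loss of `net` here.
    return 1.0 / (epoch + 1)

for epoch in range(10):
    # ... run the training steps for this epoch with `optimizer` ...
    val_loss = validate(epoch)
    scheduler.step(val_loss)  # pass the monitored metric once per epoch
    print(epoch, scheduler.get_last_lr())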