mindspore.train.ReduceLROnPlateau

class mindspore.train.ReduceLROnPlateau(monitor='eval_loss', factor=0.1, patience=10, verbose=False, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)[source]

Reduce learning rate when the monitor has stopped improving.

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors the training process, and if no improvement is seen for patience epochs, the learning rate is reduced.

Note

Learning rate grouping is currently not supported.

Parameters
  • monitor (str) – quantity to be monitored. If evaluation is performed at the end of training epochs, the valid monitors are “loss”, “eval_loss”, or the metric names passed when instantiating the Model; otherwise the only valid monitor is “loss”. When monitor is “loss” and the training network has multiple outputs, the first element is taken as the training loss.

  • factor (float) – factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.

  • patience (int) – number of epochs with no improvement after which the learning rate will be reduced. The monitor value counts as an improvement only when it is better than the historical best by more than min_delta. When the waiting counter self.wait is larger than or equal to patience, the lr is reduced; see the sketch after this parameter list. Default: 10.

  • verbose (bool) – If True, print related information; if False, run quietly. Default: False.

  • mode (str) – one of {‘auto’, ‘min’, ‘max’}. In “min” mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in “max” mode it will be reduced when the quantity monitored has stopped increasing; in “auto” mode, the direction is automatically inferred from the name of the monitored quantity. Default: “auto”.

  • min_delta (float) – threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.

  • cooldown (int) – number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.

  • min_lr (float) – lower bound on the learning rate. Default: 0.
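
To make the interaction of factor, patience, min_delta, cooldown and min_lr concrete, the following is a minimal plain-Python sketch of the reduction rule, assuming “min” mode; simulate_schedule is a hypothetical helper, not the MindSpore implementation.

>>> # Hypothetical sketch of the reduction rule, assuming "min" mode;
>>> # not the MindSpore implementation.
>>> def simulate_schedule(monitor_values, lr=0.01, factor=0.1, patience=3,
...                       min_delta=1e-4, cooldown=0, min_lr=0.0):
...     best, wait, cool = float('inf'), 0, 0
...     for value in monitor_values:
...         if cool > 0:                      # inside cooldown: count down, do not wait
...             cool -= 1
...             wait = 0
...         if value < best - min_delta:      # significant improvement: reset waiting
...             best, wait = value, 0
...         elif cool == 0:
...             wait += 1
...             if wait >= patience:          # stagnated for `patience` epochs: reduce lr
...                 lr = max(lr * factor, min_lr)
...                 cool, wait = cooldown, 0
...     return lr
...
>>> # No significant improvement for patience=3 epochs -> lr reduced once.
>>> round(simulate_schedule([0.9, 0.8, 0.8, 0.8, 0.8]), 6)
0.001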

Raises
  • ValueError – mode not in ‘auto’, ‘min’ or ‘max’.

  • ValueError – The monitor value is not a scalar.

  • ValueError – The learning rate is not a Parameter.

Examples

Note

Before running the following example, you need to define the network LeNet5 and the dataset preparation function create_dataset. Refer to Building a Network and Dataset.

>>> from mindspore import nn
>>> from mindspore.train import Model, ReduceLROnPlateau
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"})
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> cb = ReduceLROnPlateau(monitor="acc", patience=3, verbose=True)
>>> model.fit(10, dataset, callbacks=cb)

on_train_begin(run_context)[source]

Initialize variables at the beginning of training.

Parameters

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

Monitors the training process; if no improvement is seen for patience epochs, the learning rate is reduced.

Parameters

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.
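
These hooks are invoked automatically by mindspore.train.Model.fit at the corresponding stages of training; user code normally does not call them directly. As a rough illustration of the dispatch order only, the toy loop below uses invented names (ToyCallback, toy_fit) and no MindSpore API.

>>> # Hypothetical illustration of the hook dispatch order; not MindSpore API.
>>> class ToyCallback:
...     def on_train_begin(self, ctx):
...         print("init best/wait/cooldown state")
...     def on_train_epoch_end(self, ctx):
...         print("epoch", ctx["cur_epoch_num"], "-> check monitor, maybe reduce lr")
...
>>> def toy_fit(epochs, cb):
...     ctx = {"cur_epoch_num": 0}
...     cb.on_train_begin(ctx)            # once, before training starts
...     for epoch in range(1, epochs + 1):
...         ctx["cur_epoch_num"] = epoch
...         cb.on_train_epoch_end(ctx)    # once at the end of each epoch
...
>>> toy_fit(2, ToyCallback())
init best/wait/cooldown state
epoch 1 -> check monitor, maybe reduce lr
epoch 2 -> check monitor, maybe reduce lr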