mindspore.experimental.optim.lr_scheduler.ChainedScheduler

class mindspore.experimental.optim.lr_scheduler.ChainedScheduler(schedulers)[source]

Chains a list of learning rate schedulers. Calling step() executes the step() function of each scheduler in the chain in sequence.

Warning

This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in Experimental Optimizer.

Parameters

schedulers (list[mindspore.experimental.optim.lr_scheduler.LRScheduler]) – List of learning rate schedulers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.fc = nn.Dense(16 * 5 * 5, 120)
...     def construct(self, x):
...         return self.fc(x)
>>> net = Net()
>>> optimizer = optim.Adam(net.trainable_params(), 0.01)
>>> scheduler1 = optim.lr_scheduler.PolynomialLR(optimizer)
>>> scheduler2 = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)
>>> scheduler = optim.lr_scheduler.ChainedScheduler([scheduler1, scheduler2])
>>> for i in range(6):
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.004)]
[Tensor(shape=[], dtype=Float32, value= 0.0015)]
[Tensor(shape=[], dtype=Float32, value= 0.0005)]
[Tensor(shape=[], dtype=Float32, value= 0.000125)]
[Tensor(shape=[], dtype=Float32, value= 0)]
[Tensor(shape=[], dtype=Float32, value= 0)]
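The printed values follow from composing the two per-step factors. Assuming PolynomialLR's defaults (total_iters=5, power=1.0), the polynomial factor at step 1 is 0.8, so the learning rate becomes 0.01 × 0.8 × 0.5 = 0.004; at step 2 the factor is 0.75, giving 0.004 × 0.75 × 0.5 = 0.0015, and so on. Once total_iters steps have elapsed, the polynomial factor reaches 0 and the learning rate stays at 0.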
get_last_lr()[source]

Return the last learning rate computed by the current scheduler, as a list of Tensors.

step()[source]

Sequentially execute the step() function of each saved learning rate scheduler.
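
The chaining behavior can be illustrated with a minimal sketch (MulScheduler and SimpleChained below are hypothetical illustrations, not MindSpore APIs): step() applies every scheduler's update in the order the schedulers were given, so the per-step decay factors compose multiplicatively.

>>> class MulScheduler:
...     # Hypothetical stand-in: multiplies a shared lr by a fixed factor.
...     def __init__(self, state, factor):
...         self.state = state
...         self.factor = factor
...     def step(self):
...         self.state["lr"] *= self.factor
>>> class SimpleChained:
...     # Sketch of chaining: run each scheduler's step() in order.
...     def __init__(self, schedulers):
...         self.schedulers = schedulers
...     def step(self):
...         for s in self.schedulers:
...             s.step()
>>> state = {"lr": 0.01}
>>> chain = SimpleChained([MulScheduler(state, 0.8), MulScheduler(state, 0.5)])
>>> chain.step()
>>> round(state["lr"], 6)  # both factors applied within a single step
0.004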