mindspore.amp.DynamicLossScaleManager

class mindspore.amp.DynamicLossScaleManager(init_loss_scale=2 ** 24, scale_factor=2, scale_window=2000)[source]

Loss scale (the magnification factor applied to gradients when mixed precision is used) manager with the loss scale dynamically adjusted. Inherits from mindspore.amp.LossScaleManager.

Parameters
  • init_loss_scale (float) – Initial loss scale. Default: 2 ** 24.

  • scale_factor (int) – Factor by which the loss scale is multiplied when increased and divided when decreased. Default: 2.

  • scale_window (int) – Number of consecutive overflow-free steps after which the loss scale is increased. Default: 2000.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore as ms
>>> from mindspore import amp, nn
>>>
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/r2.1/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_scale_manager = amp.DynamicLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = ms.Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
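
The manager can likewise be passed to mindspore.amp.build_train_network in place of mindspore.Model. A minimal sketch continuing the example above; nn.SoftmaxCrossEntropyWithLogits is assumed here as the loss function:

>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> train_network = amp.build_train_network(net, optim, loss, level="O2",
...                                         loss_scale_manager=loss_scale_manager)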
get_drop_overflow_update()[source]

Whether to drop the optimizer update for the current step when an overflow occurs.

Returns

bool, always True.
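
A minimal check, using default construction arguments:

>>> from mindspore import amp
>>> manager = amp.DynamicLossScaleManager()
>>> manager.get_drop_overflow_update()
True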

get_loss_scale()[source]

Get the current loss scale value.

Returns

float, the current loss scale value.
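
A minimal sketch; the init_loss_scale of 1024.0 below is an illustrative assumption, not a recommended setting:

>>> from mindspore import amp
>>> manager = amp.DynamicLossScaleManager(init_loss_scale=1024.0)
>>> manager.get_loss_scale()
1024.0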

get_update_cell()[source]

Returns the instance of mindspore.nn.Cell that is used to update the loss scale; it is called by mindspore.nn.TrainOneStepWithLossScaleCell during training.

Returns

mindspore.nn.DynamicLossScaleUpdateCell.
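
A hedged sketch of wiring the update cell into a custom training step; the network, loss, and optimizer mirror the class-level example and are placeholders:

>>> from mindspore import amp, nn
>>>
>>> # `net` is the LeNet5 instance from the class-level example.
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> net_with_loss = nn.WithLossCell(net, loss)
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> manager = amp.DynamicLossScaleManager()
>>> update_cell = manager.get_update_cell()  # mindspore.nn.DynamicLossScaleUpdateCell
>>> train_network = nn.TrainOneStepWithLossScaleCell(net_with_loss, optim,
...                                                  scale_sense=update_cell)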

update_loss_scale(overflow)[source]

Update the loss scale value according to the status of overflow. If overflow occurs, the loss scale is decreased by scale_factor immediately; otherwise, it is increased by scale_factor once scale_window consecutive overflow-free steps have elapsed.

Parameters

overflow (bool) – Whether overflow occurred in the current step.
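
A minimal sketch of the update arithmetic; the small init_loss_scale and scale_window values are illustrative assumptions:

>>> from mindspore import amp
>>>
>>> manager = amp.DynamicLossScaleManager(init_loss_scale=1024.0, scale_factor=2,
...                                       scale_window=100)
>>> manager.update_loss_scale(True)   # overflow: the scale is divided by scale_factor
>>> manager.get_loss_scale()
512.0
>>> # Overflow-free steps raise the scale by scale_factor only after
>>> # scale_window of them in a row.
>>> manager.update_loss_scale(False)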