mindspore.amp.DynamicLossScaler
- class mindspore.amp.DynamicLossScaler(scale_value, scale_factor, scale_window)[source]
Dynamic loss scale class.
Dynamic loss scaling tries to determine the largest loss scale value that keeps gradients finite. It does this by increasing the loss scale by scale_factor every scale_window steps as long as the gradients remain finite; if an overflow occurs, it multiplies the loss scale by 1 / scale_factor and resets the step counter.
Warning
This is an experimental API that is subject to change or deletion.
- Parameters
  - scale_value (Union[float, int]) – The initial loss scale value.
  - scale_factor (int) – The factor by which scale_value is increased when gradients stay finite, and decreased when an overflow is detected.
  - scale_window (int) – The number of consecutive steps with finite gradients after which scale_value is increased by scale_factor.
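The parameters interact as sketched below. This is a schematic, framework-free restatement of the rule described above, not the MindSpore implementation; the names adjust_sketch, scale and counter are illustrative only.

>>> def adjust_sketch(scale, counter, grads_finite, scale_factor=2, scale_window=2000):
...     """Schematic update rule: grow the scale after scale_window finite steps, shrink it on overflow."""
...     if grads_finite:
...         counter += 1
...         if counter == scale_window:
...             scale *= scale_factor    # gradients stayed finite long enough: increase the scale
...             counter = 0
...     else:
...         scale /= scale_factor        # overflow detected: decrease the scale and reset the counter
...         counter = 0
...     return scale, counter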
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore
>>> from mindspore import amp, Tensor
>>> import numpy as np
>>> loss_scaler = amp.DynamicLossScaler(scale_value=2**10, scale_factor=2, scale_window=1)
>>> grads = (Tensor(np.array([np.log(-1), 1.0]), mindspore.float16),
...          Tensor(np.array([0.2]), mindspore.float16))
>>> unscaled_grads = loss_scaler.unscale(grads)
>>> grads_finite = amp.all_finite(unscaled_grads)
>>> loss_scaler.adjust(grads_finite)
True
>>> print(loss_scaler.scale_value.asnumpy())
512.0
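The doctest above exercises the scaler once. In a full training step the scaler is typically combined with its scale and unscale methods and with mindspore.value_and_grad, as in the hedged sketch below; net, loss_fn, optimizer and the data/label inputs are assumed to be defined elsewhere and are not part of this API.

>>> import mindspore
>>> from mindspore import amp
>>> loss_scaler = amp.DynamicLossScaler(scale_value=2**16, scale_factor=2, scale_window=2000)
>>> def forward_fn(data, label):
...     loss = loss_fn(net(data), label)
...     return loss_scaler.scale(loss)          # scale the loss so small gradients stay representable
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters)
>>> def train_step(data, label):
...     scaled_loss, grads = grad_fn(data, label)
...     grads = loss_scaler.unscale(grads)      # recover the true gradient magnitudes
...     is_finite = amp.all_finite(grads)       # overflow check
...     if is_finite:
...         optimizer(grads)                    # apply the update only when gradients are finite
...     loss_scaler.adjust(is_finite)           # grow or shrink scale_value for the next step
...     return loss_scaler.unscale(scaled_loss)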
- adjust(grads_finite)[source]
Adjust the scale_value depending on whether the gradients are finite.
- Parameters
grads_finite (Tensor) – A scalar bool Tensor indicating whether the gradients are finite.
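Both branches of adjust can be illustrated with a small scaler. The values below are worked out from the rule described above (scale_value=2**10, scale_factor=2, scale_window=1) rather than copied from a test run:

>>> from mindspore import amp, Tensor
>>> loss_scaler = amp.DynamicLossScaler(scale_value=2**10, scale_factor=2, scale_window=1)
>>> _ = loss_scaler.adjust(Tensor(False))   # overflow: scale_value is divided by scale_factor
>>> print(loss_scaler.scale_value.asnumpy())
512.0
>>> _ = loss_scaler.adjust(Tensor(True))    # scale_window finite steps: scale_value is multiplied by scale_factor
>>> print(loss_scaler.scale_value.asnumpy())
1024.0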