mindspore.amp.DynamicLossScaler
- class mindspore.amp.DynamicLossScaler(scale_value, scale_factor, scale_window)[source]
Dynamic loss scale class.
Dynamic loss scaling tries to determine the largest loss scale value that keeps gradients finite. It does this by multiplying the loss scale by scale_factor every scale_window steps while the gradients remain finite; when non-finite gradients are detected, it instead multiplies the loss scale by 1 / scale_factor and resets the counter.
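The behaviour can be summarized in plain Python. This is only an illustrative sketch of the rule described above, not the MindSpore implementation; the explicit counter argument is an assumption about how the scale_window is tracked:

def adjust_scale(scale_value, counter, grads_finite, scale_factor=2, scale_window=50):
    """Illustrative rule: grow the scale after scale_window finite steps,
    shrink it and reset the counter as soon as an overflow is seen."""
    if grads_finite:
        counter += 1
        if counter == scale_window:
            scale_value *= scale_factor   # grads stayed finite for a full window
            counter = 0
    else:
        scale_value /= scale_factor       # overflow detected: back off
        counter = 0
    return scale_value, counter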
Note
This is an experimental interface that is subject to change or deletion.
- Parameters
  - scale_value (Union[float, int]) – The initial loss scale value.
  - scale_factor (int) – The factor by which scale_value is multiplied or divided when it is adjusted.
  - scale_window (int) – The number of consecutive steps with finite gradients after which scale_value is increased.
- Supported Platforms:
Ascend GPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import amp, Tensor
>>> loss_scaler = amp.DynamicLossScaler(scale_value=2**10, scale_factor=2, scale_window=50)
>>> grads = (Tensor(np.array([np.log(-1), 1.0]), mindspore.float16),
...          Tensor(np.array([0.2]), mindspore.float16))
>>> unscaled_grads = loss_scaler.unscale(grads)
>>> grads_finite = amp.all_finite(unscaled_grads)
>>> loss_scaler.adjust(grads_finite)
>>> print(loss_scaler.scale_value.asnumpy())
512.0
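In this example the first gradient contains np.log(-1), which is NaN, so all_finite returns False and adjust divides scale_value by scale_factor: 2**10 / 2 = 512.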
- adjust(grads_finite)[source]
Adjust the scale_value depending on whether the gradients are finite.
- Parameters
grads_finite (Tensor) – a scalar bool Tensor indicating whether the grads are finite.
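For context, the sketch below shows a typical per-step usage pattern in which adjust is called right after checking the unscaled gradients. The tiny network, loss function, optimizer and data are only placeholders, and the scale method belongs to the wider LossScaler interface rather than to this section:

import numpy as np
import mindspore
from mindspore import amp, nn, Tensor

# Hypothetical tiny model, loss and optimizer; only the loss-scaling calls matter here.
net = nn.Dense(4, 1)
loss_fn = nn.MAELoss()
optimizer = nn.SGD(net.trainable_params(), learning_rate=0.01)
loss_scaler = amp.DynamicLossScaler(scale_value=2**10, scale_factor=2, scale_window=50)

def forward_fn(data, label):
    loss = loss_fn(net(data), label)
    return loss_scaler.scale(loss)                   # scale the loss before backward

grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters)

def train_step(data, label):
    loss, grads = grad_fn(data, label)
    unscaled_grads = loss_scaler.unscale(grads)      # remove the scale from the grads
    grads_finite = amp.all_finite(unscaled_grads)
    if grads_finite:
        optimizer(unscaled_grads)                    # apply the update only on finite grads
    loss_scaler.adjust(grads_finite)                 # grow or shrink scale_value
    return loss_scaler.unscale(loss)

data = Tensor(np.random.rand(8, 4), mindspore.float32)
label = Tensor(np.random.rand(8, 1), mindspore.float32)
print(train_step(data, label))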