mindspore.amp.StaticLossScaler
- class mindspore.amp.StaticLossScaler(scale_value)[source]
Static loss scale class.
Scales and unscales the loss or gradients by a fixed constant.
Warning
This is an experimental API that is subject to change or deletion.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore
>>> from mindspore import amp, Tensor
>>> import numpy as np
>>> loss_scaler = amp.StaticLossScaler(scale_value=2**10)
>>> loss_value = Tensor([1.], mindspore.float32)
>>> scaled_loss_value = loss_scaler.scale(loss_value)
>>> print(scaled_loss_value)
[1024.]
>>> grads = (Tensor(np.array([1.5, 1.0]), mindspore.float16),
...          Tensor(np.array([1.2]), mindspore.float16))
>>> unscaled_grads = loss_scaler.unscale(grads)
>>> print(unscaled_grads)
(Tensor(shape=[2], dtype=Float16, value= [ 1.4648e-03,  9.7656e-04]),
 Tensor(shape=[1], dtype=Float16, value= [ 1.1721e-03]))
- adjust(grads_finite)[source]
Adjust scale_value in LossScaler. Since scale_value is fixed in StaticLossScaler, this method returns False directly.
- Parameters
grads_finite (Tensor) – a scalar bool Tensor indicating whether the grads are finite.
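To make the semantics above concrete, here is a minimal pure-Python sketch of a static loss scaler (an illustrative stand-in, not the MindSpore implementation, and it operates on plain floats rather than Tensors): scale() multiplies by the fixed constant, unscale() divides by it, and adjust() always returns False because the scale value never changes.

```python
class StaticLossScalerSketch:
    """Illustrative sketch of static loss scaling (hypothetical class,
    not part of MindSpore). Scales and unscales by a fixed constant."""

    def __init__(self, scale_value):
        if scale_value <= 0:
            raise ValueError("scale_value must be positive")
        self.scale_value = scale_value

    def scale(self, value):
        # Multiply the loss (or a gradient) by the fixed constant.
        return value * self.scale_value

    def unscale(self, value):
        # Divide a scaled gradient back to its true magnitude.
        return value / self.scale_value

    def adjust(self, grads_finite):
        # scale_value is fixed, so no adjustment ever happens;
        # mirror StaticLossScaler.adjust and return False directly.
        return False


scaler = StaticLossScalerSketch(scale_value=2**10)
print(scaler.scale(1.0))      # 1024.0
print(scaler.unscale(1024.0)) # 1.0
print(scaler.adjust(True))    # False
```

Note that adjust() ignores grads_finite entirely; a dynamic loss scaler would instead use it to decide whether to grow or shrink scale_value.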