mindspore.nn.ActQuant
- class mindspore.nn.ActQuant(activation, ema=False, ema_decay=0.999, fake_before=False, quant_config=quant_config_default, quant_dtype=QuantDtype.INT8)[source]
Quantization aware training activation function.
Adds the fake quantized operation to the end of the activation operation, by which the output of the activation operation will be truncated. For more details about quantization, please refer to the implementation of the subclasses of class _Observer, for example, mindspore.nn.FakeQuantWithMinMaxObserver.
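As a rough sketch of what the appended operation does, fake quantization maps the activation output onto an integer grid and immediately back to float, so training sees the quantization error. The fake_quant helper below is illustrative only (not the MindSpore implementation) and assumes simple asymmetric min/max quantization:
>>> import numpy as np
>>> def fake_quant(x, x_min, x_max, num_bits=8):
...     # Simulated (fake) quantization: map x onto the integer grid
...     # [0, 2**num_bits - 1], then dequantize back to float.
...     quant_max = 2 ** num_bits - 1
...     scale = (x_max - x_min) / quant_max
...     zero_point = np.round(-x_min / scale)
...     q = np.clip(np.round(x / scale) + zero_point, 0, quant_max)
...     return (q - zero_point) * scale
>>> fake_quant(np.array([0.0, 0.5, 2.0]), x_min=0.0, x_max=2.0)
array([0.        , 0.50196078, 2.        ])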
- Parameters
activation (Cell) – Activation cell.
ema (bool) – Whether to use the exponential moving average (EMA) algorithm to update the min and max values. Default: False.
ema_decay (float) – Exponential moving average algorithm parameter. Default: 0.999.
fake_before (bool) – Whether to add the fake quantized operation before the activation operation. Default: False.
quant_config (QuantConfig) – Configures the types of quant observer and quant settings of weight and activation. Note that QuantConfig is a special namedtuple designed for quantization; it can be generated by the mindspore.compression.quant.create_quant_config() method. Default: QuantConfig with both items set to default FakeQuantWithMinMaxObserver. A combined usage sketch follows this parameter list.
quant_dtype (QuantDtype) – Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
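For example, a minimal sketch (using only the parameters documented above and the default quant_config) that places fake quantized operations on both sides of the activation and updates min/max with an exponential moving average:
>>> from mindspore import nn
>>> from mindspore.compression import quant
>>> qconfig = quant.create_quant_config()
>>> # Fake quantize both before and after ReLU6; track the observed
>>> # min/max values with an exponential moving average (decay 0.99).
>>> act_quant = nn.ActQuant(nn.ReLU6(), ema=True, ema_decay=0.99,
...                         fake_before=True, quant_config=qconfig)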
- Inputs:
x (Tensor) - The input of ActQuant. The input dimension is preferably 2D or 4D.
- Outputs:
Tensor, with the same type and shape as the input x.
- Supported Platforms:
Ascend
GPU
- Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> from mindspore.compression import quant
>>> qconfig = quant.create_quant_config()
>>> act_quant = nn.ActQuant(nn.ReLU(), quant_config=qconfig)
>>> x = Tensor(np.array([[1, 2, -1], [-2, 0, -1]]), mindspore.float32)
>>> result = act_quant(x)
>>> print(result)
[[0.9882355 1.9764705 0.       ]
 [0.        0.        0.       ]]