mindspore.nn.FakeQuantWithMinMaxObserver

class mindspore.nn.FakeQuantWithMinMaxObserver(min_init=-6, max_init=6, ema=False, ema_decay=0.999, per_channel=False, channel_axis=1, num_channels=1, quant_dtype=QuantDtype.INT8, symmetric=False, narrow_range=False, quant_delay=0, neg_trunc=False, mode='DEFAULT')[source]

Quantization aware operation which provides the fake quantization observer function on data with min and max.

The details of the DEFAULT quantization mode are described below:

The running minimum x_{min} and maximum x_{max} are computed as:

x_{min} = \begin{cases}
  \min(\min(X),\ 0) & \text{if } ema = \text{False} \\
  \min((1 - c) \cdot \min(X) + c \cdot x_{min},\ 0) & \text{otherwise}
\end{cases}

x_{max} = \begin{cases}
  \max(\max(X),\ 0) & \text{if } ema = \text{False} \\
  \max((1 - c) \cdot \max(X) + c \cdot x_{max},\ 0) & \text{otherwise}
\end{cases}

where X is the input tensor, and c is the ema_decay.
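
The running min/max update can be sketched in NumPy as follows. This is a minimal illustration of the formulas above, not the MindSpore implementation, and the function name is invented for this sketch:

```python
import numpy as np

def update_running_min_max(x, x_min, x_max, ema=False, ema_decay=0.999):
    # Mirrors the formulas above: without EMA the observed range is taken
    # directly; with EMA it is blended with the previous running values.
    # Both bounds are clamped so that 0 stays inside the range.
    c = ema_decay
    if not ema:
        new_min = min(float(x.min()), 0.0)
        new_max = max(float(x.max()), 0.0)
    else:
        new_min = min((1 - c) * float(x.min()) + c * x_min, 0.0)
        new_max = max((1 - c) * float(x.max()) + c * x_max, 0.0)
    return new_min, new_max
```
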

The scale and zero point zp are computed as:

scale = \begin{cases}
  \frac{x_{max} - x_{min}}{Q_{max} - Q_{min}} & \text{if } symmetric = \text{False} \\
  \frac{2 \cdot \max(x_{max},\ |x_{min}|)}{Q_{max} - Q_{min}} & \text{otherwise}
\end{cases}

zp\_min = Q_{min} - \frac{x_{min}}{scale}

zp = \left\lfloor \min(Q_{max},\ \max(Q_{min},\ zp\_min)) + 0.5 \right\rfloor

where Q_{max} and Q_{min} are determined by quant_dtype; for example, if quant_dtype=INT8, then Q_{max} = 127 and Q_{min} = -128.
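
As a sketch, the integer range implied by the quantization data type can be derived as below. The parameter names are illustrative, not part of the MindSpore API:

```python
def quant_range(num_bits=8, signed=True, narrow_range=False):
    # Signed b-bit types span [-2^(b-1), 2^(b-1) - 1]; narrow_range drops
    # the most negative value so the range is symmetric around zero.
    if signed:
        q_min = -(2 ** (num_bits - 1)) + (1 if narrow_range else 0)
        q_max = 2 ** (num_bits - 1) - 1
    else:
        q_min = 0
        q_max = 2 ** num_bits - 1
    return q_min, q_max
```
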

The fake quant output is computed as:

u_{min} = (Q_{min} - zp) \cdot scale

u_{max} = (Q_{max} - zp) \cdot scale

u_X = \left\lfloor \frac{\min(u_{max},\ \max(u_{min},\ X)) - u_{min}}{scale} + 0.5 \right\rfloor

output = u_X \cdot scale + u_{min}
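
The whole DEFAULT-mode forward pass can be sketched in NumPy, following the scale/zero-point and fake-quant formulas above. This is an illustration of the stated formulas, not the MindSpore kernel:

```python
import numpy as np

def fake_quant_default(x, x_min, x_max, q_min=-128, q_max=127, symmetric=False):
    # Scale and zero point, per the formulas above.
    if symmetric:
        scale = 2 * max(x_max, abs(x_min)) / (q_max - q_min)
    else:
        scale = (x_max - x_min) / (q_max - q_min)
    zp_min = q_min - x_min / scale
    zp = np.floor(min(q_max, max(q_min, zp_min)) + 0.5)
    # Clip to the representable range, round to the grid, dequantize.
    u_min = (q_min - zp) * scale
    u_max = (q_max - zp) * scale
    u_x = np.floor((np.clip(x, u_min, u_max) - u_min) / scale + 0.5)
    return u_x * scale + u_min
```

Values inside [u_min, u_max] are snapped to the nearest point of the quantization grid; values outside are saturated to the range boundaries.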

The details of the LEARNED_SCALE quantization mode are described below:

The fake quant output is computed as:

\bar{X} = \begin{cases}
  clip\left(\frac{X}{maxq},\ 0,\ 1\right) & \text{if } neg\_trunc \\
  clip\left(\frac{X}{maxq},\ -1,\ 1\right) & \text{otherwise}
\end{cases}

output = \frac{\left\lfloor \bar{X} \cdot Q_{max} + 0.5 \right\rfloor \cdot scale}{Q_{max}}

where X is the input tensor, and Q_{max} (quant_max) is determined by quant_dtype and neg_trunc; for example, if quant_dtype=INT8 and neg_trunc is in effect, then Q_{max} = 256, otherwise Q_{max} = 127.
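
A NumPy sketch of the LEARNED_SCALE forward pass is given below, interpreting scale as maxq so that the rounded integer value is mapped back into the real-valued range. That interpretation, the function name, and the q_max default are assumptions of this sketch, not the MindSpore kernel:

```python
import numpy as np

def fake_quant_learned_scale(x, maxq, q_max=127, neg_trunc=False):
    # With neg_trunc the normalized input is clipped to [0, 1], otherwise
    # to [-1, 1]; q_max also differs in that case (e.g. 256 vs 127).
    low = 0.0 if neg_trunc else -1.0
    x_bar = np.clip(x / maxq, low, 1.0)
    # Round onto the integer grid and rescale back by maxq / q_max.
    return np.floor(x_bar * q_max + 0.5) * maxq / q_max
```
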

maxq is updated during training, and its gradient is calculated as follows:

\frac{\partial\, output}{\partial\, maxq} = \begin{cases}
  -\frac{X}{maxq} + \left\lfloor \frac{X}{maxq} \right\rceil & \text{if } bound_{lower} < \frac{X}{maxq} < 1 \\
  -1 & \text{if } \frac{X}{maxq} \le bound_{lower} \\
  1 & \text{if } \frac{X}{maxq} \ge 1
\end{cases}

bound_{lower} = \begin{cases}
  0 & \text{if } neg\_trunc \\
  -1 & \text{otherwise}
\end{cases}
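
The piecewise gradient above can be sketched directly, taking the ⌊·⌉ in the formula as round-to-nearest (the function name is illustrative):

```python
import numpy as np

def dout_dmaxq(x, maxq, neg_trunc=False):
    # bound_lower is 0 with negative truncation, -1 otherwise.
    bound_lower = 0.0 if neg_trunc else -1.0
    r = x / maxq
    # Inside the clipping range the gradient is the rounding residual
    # -r + round(r); outside it saturates to -1 or +1.
    return np.where(r >= 1.0, 1.0,
                    np.where(r <= bound_lower, -1.0, -r + np.round(r)))
```
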

Then minq is computed as:

minq = \begin{cases}
  0 & \text{if } neg\_trunc \\
  -maxq & \text{otherwise}
\end{cases}

When exporting, the scale and zero point zp are computed as:

scale = \frac{maxq}{quant\_max}, \qquad zp = 0

zp is always equal to 0, due to the symmetric nature of LEARNED_SCALE.

Parameters
  • min_init (int, float, list) – The initialized min value. Default: -6.

  • max_init (int, float, list) – The initialized max value. Default: 6.

  • ema (bool) – Whether the exponential moving average (EMA) algorithm is used to update min and max. Default: False.

  • ema_decay (float) – Exponential Moving Average algorithm parameter. Default: 0.999.

  • per_channel (bool) – Quantization granularity based on layer or on channel. Default: False.

  • channel_axis (int) – Quantization by channel axis. Default: 1.

  • num_channels (int) – Declares the channel size of min and max. Default: 1.

  • quant_dtype (QuantDtype) – The data type of quantization, supporting 4 and 8 bits. Default: QuantDtype.INT8.

  • symmetric (bool) – Whether the quantization algorithm is symmetric or not. Default: False.

  • narrow_range (bool) – Whether the quantization algorithm uses narrow range or not. Default: False.

  • quant_delay (int) – Quantization delay parameters according to the global step. Default: 0.

  • neg_trunc (bool) – Whether the quantization algorithm uses negative truncation or not. Default: False.

  • mode (str) – Optional quantization mode; currently only DEFAULT (QAT) and LEARNED_SCALE are supported. Default: "DEFAULT".

Inputs:
  • x (Tensor) - The input of FakeQuantWithMinMaxObserver. The input dimension is preferably 2D or 4D.

Outputs:

Tensor, with the same type and shape as the x.

Raises
  • TypeError – If min_init or max_init is not int, float or list.

  • TypeError – If quant_delay is not an int.

  • ValueError – If quant_delay is less than 0.

  • ValueError – If min_init is not less than max_init.

  • ValueError – If mode is neither DEFAULT nor LEARNED_SCALE.

  • ValueError – If mode is LEARNED_SCALE and symmetric is not True.

  • ValueError – If mode is LEARNED_SCALE, and narrow_range is not True unless when neg_trunc is True.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> fake_quant = nn.FakeQuantWithMinMaxObserver()
>>> x = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> result = fake_quant(x)
>>> print(result)
[[ 0.9882355  1.9764705  0.9882355]
 [-1.9764705  0.        -0.9882355]]
extend_repr()[source]

Return the string representation of the instance object.

reset(quant_dtype=QuantDtype.INT8, min_init=-6, max_init=6)[source]

Reset the quant max parameter (e.g. 256) and the initial values of the minq and maxq parameters. This function is currently only valid for LEARNED_SCALE mode.