mindspore.ops.LRN
- class mindspore.ops.LRN(depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')
Local Response Normalization.
Warning
LRN is deprecated on Ascend due to potential accuracy problems. It is recommended to use other normalization methods instead, e.g. mindspore.ops.BatchNorm.
\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1, c+n/2)} a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) is the value of the pixel at channel \(c\) of the feature map, \(N\) is the number of channels, \(n/2\) is the depth_radius, \(k\) is the bias, \(\alpha\) is the alpha and \(\beta\) is the beta.
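For reference, the following is a minimal NumPy sketch (not part of the MindSpore API) that transcribes the formula above literally for a single pixel's vector of channel values. The helper name is illustrative only, and the 1/n scaling of alpha follows the formula as written; the operator's exact scaling may differ by backend.

>>> import numpy as np
>>> def lrn_channel_vector(a, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
...     # a: 1-D array holding the N channel values of one pixel
...     a = np.asarray(a, dtype=np.float32)
...     N = a.shape[0]
...     n = 2 * depth_radius  # the formula's n, so that n/2 == depth_radius
...     b = np.empty_like(a)
...     for c in range(N):
...         lo, hi = max(0, c - depth_radius), min(N - 1, c + depth_radius)
...         sq_sum = np.sum(a[lo:hi + 1] ** 2)  # squared sum over the clamped channel window
...         b[c] = a[c] * (bias + alpha / n * sq_sum) ** (-beta)
...     return b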
- Parameters
depth_radius (int) – Half-width of the 1-D normalization window (a 0-D value). Default: 5.
bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.
alpha (float) – A scale factor, usually positive. Default: 1.0.
beta (float) – An exponent. Default: 0.5.
norm_region (str) – Specifies the normalization region. Options: "ACROSS_CHANNELS". Default: "ACROSS_CHANNELS".
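For instance, a minimal construction sketch using only the keyword arguments listed above; the non-default values are arbitrary and purely illustrative:

>>> from mindspore import ops
>>> lrn = ops.LRN(depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75, norm_region="ACROSS_CHANNELS")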
- Inputs:
x (Tensor) - A 4-D Tensor with float16 or float32 data type.
- Outputs:
Tensor, with the same shape and data type as x.
- Raises
- Supported Platforms:
GPU CPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> lrn = ops.LRN()
>>> output = lrn(x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
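As a cross-check, the sketch below reproduces the printed values with plain NumPy. It assumes the squared channel sum is scaled by alpha directly (the TensorFlow-compatible convention, which the sample output above corresponds to); the helper name is illustrative and not part of the API.

>>> import numpy as np
>>> def lrn_numpy(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
...     # x is NCHW; each value is normalized over a window of neighboring channels.
...     channels = x.shape[1]
...     out = np.empty_like(x)
...     for c in range(channels):
...         lo, hi = max(0, c - depth_radius), min(channels - 1, c + depth_radius)
...         sq_sum = np.sum(x[:, lo:hi + 1] ** 2, axis=1)  # squared sum over the channel window
...         out[:, c] = x[:, c] / (bias + alpha * sq_sum) ** beta
...     return out
>>> x_np = np.array([[[[0.1], [0.2]], [[0.3], [0.4]]]], dtype=np.float32)
>>> expected = np.array([[[[0.09534626], [0.1825742]], [[0.2860388], [0.3651484]]]], dtype=np.float32)
>>> np.allclose(lrn_numpy(x_np), expected)
True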