mindspore.mint.nn.LayerNorm

class mindspore.mint.nn.LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, bias=True, dtype=None)[source]

Applies Layer Normalization over a mini-batch of inputs.

Layer Normalization is widely used in recurrent neural networks. It normalizes each training case in a mini-batch independently, as described in the paper Layer Normalization.

Unlike Batch Normalization, Layer Normalization performs exactly the same computation at training and evaluation time. It normalizes across the channel and spatial dimensions of each sample rather than across the batch dimension. \(\gamma\) is the scale value learned through training and \(\beta\) is the shift value. The computation can be described by the following formula:

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
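
To make the formula concrete, here is a minimal NumPy sketch (independent of MindSpore or any device) that normalizes a single vector with the defaults \(\gamma = 1\), \(\beta = 0\), and \(\epsilon = \text{1e-5}\):

>>> import numpy as np
>>> x = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
>>> gamma, beta, eps = 1.0, 0.0, 1e-5
>>> # E[x] and Var[x] are computed over the normalized dimensions
>>> y = (x - x.mean()) / np.sqrt(x.var() + eps) * gamma + beta
>>> print(np.round(y, 4))
[-1.3416 -0.4472  0.4472  1.3416]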

Warning

This is an experimental API that is subject to change or deletion.

Parameters
  • normalized_shape (Union(tuple[int], list[int], int)) – The shape of the trailing dimensions of x over which normalization is computed.

  • eps (float) – A value added to the denominator for numerical stability (\(\epsilon\)). Default: 1e-5.

  • elementwise_affine (bool) – Whether a learnable affine transformation is applied. When set to True, the weight parameter is initialized to 1 and the bias to 0; a sketch after this list illustrates the resulting parameters. Default: True.

  • bias (bool) – If set to False, the layer will not learn an additive bias (only relevant if elementwise_affine is True). Default: True.

  • dtype (mindspore.dtype) – Dtype of the Parameters. Default: None.
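
A small sketch of how these flags shape the layer's learnable parameters. It assumes the parameters are exposed as weight and bias attributes, mirroring the PyTorch-style mint API; the exact attribute names are an assumption, not confirmed by this page:

>>> import mindspore as ms
>>> # default: learnable weight (gamma) and bias (beta),
>>> # each with shape normalized_shape (assumed attribute names)
>>> m = ms.mint.nn.LayerNorm(10)
>>> print(m.weight.shape, m.bias.shape)
(10,) (10,)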

Inputs:
  • x (Tensor) - The shape is \((N, *)\), where the trailing dimensions \(*\) must match normalized_shape.

Outputs:

Tensor, the normalized, scaled, and shifted tensor, with the same shape and data type as x.

Raises

TypeError – If eps is not a float.

Supported Platforms:

Ascend

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> x = ms.Tensor(np.ones([20, 5, 10, 10]), ms.float32)
>>> # normalize over all dimensions except the batch dimension
>>> normalized_shape = x.shape[1:]
>>> m = ms.mint.nn.LayerNorm(normalized_shape)
>>> output = m(x)
>>> print(output.shape)
(20, 5, 10, 10)
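
As a sanity check, the result can be compared against a direct NumPy evaluation of the formula above. This sketch assumes the default affine parameters (weight of ones, bias of zeros) and an available Ascend backend:

>>> import numpy as np
>>> import mindspore as ms
>>> x = ms.Tensor(np.random.randn(2, 3, 4).astype(np.float32))
>>> m = ms.mint.nn.LayerNorm(x.shape[1:])
>>> y = m(x).asnumpy()
>>> xn = x.asnumpy()
>>> # reference: normalize over the last two axes, mirroring normalized_shape
>>> mean = xn.mean(axis=(1, 2), keepdims=True)
>>> var = xn.var(axis=(1, 2), keepdims=True)
>>> ref = (xn - mean) / np.sqrt(var + 1e-5)
>>> print(np.allclose(y, ref, atol=1e-4))
True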