mindspore.nn.TransformerEncoder

class mindspore.nn.TransformerEncoder(encoder_layer, num_layers, norm=None)

Transformer encoder module consisting of multiple stacked TransformerEncoderLayer layers, each including multi-head self-attention and a feedforward network. Users can build the BERT (https://arxiv.org/abs/1810.04805) model with corresponding parameters.

Warning

This is an experimental API that is subject to change or deletion.

Parameters
  • encoder_layer (Cell) – An instance of the TransformerEncoderLayer() class.

  • num_layers (int) – The number of encoder-layers in the encoder.

  • norm (Cell, optional) – The layer normalization module applied to the output of the final encoder layer; see the sketch below. Default: None.
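
For example, a final LayerNorm can be supplied via norm (a minimal sketch; the d_model=512 sizing matches the example below and is illustrative only):

>>> import mindspore as ms
>>> encoder_layer = ms.nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> # Normalize the output of the last encoder layer over the feature axis.
>>> final_norm = ms.nn.LayerNorm((512,))
>>> encoder = ms.nn.TransformerEncoder(encoder_layer, num_layers=6, norm=final_norm)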

Inputs:
  • src (Tensor): The sequence to the encoder.

  • src_mask (Tensor, optional): The mask of the src sequence. Default: None.

  • src_key_padding_mask (Tensor, optional): The mask of the src keys per batch; see the sketch after this list. Default: None.
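
As a sketch of the optional padding mask (the (N, S) mask shape for a batched (S, N, E) src follows the usual Transformer convention and is an assumption here):

>>> import mindspore as ms
>>> import numpy as np
>>> encoder_layer = ms.nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> transformer_encoder = ms.nn.TransformerEncoder(encoder_layer, num_layers=6)
>>> src = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)  # (S, N, E)
>>> # Bool padding mask: True marks key positions to ignore; assumed shape (N, S).
>>> pad_mask = ms.Tensor(np.zeros((32, 10), dtype=np.bool_))
>>> out = transformer_encoder(src, src_key_padding_mask=pad_mask)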

Outputs:

Tensor. The encoded output, with the same shape as src.

Raises

AssertionError – If the dtype of src_key_padding_mask is not bool or a floating-point type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> encoder_layer = ms.nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> transformer_encoder = ms.nn.TransformerEncoder(encoder_layer, num_layers=6)
>>> src = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)
>>> out = transformer_encoder(src)
>>> print(out.shape)
(10, 32, 512)
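
Following the BERT reference above, an encoder at BERT-base scale could be assembled as follows (a sketch; the dim_feedforward and activation arguments are assumptions mirroring the BERT-base configuration):

>>> import mindspore as ms
>>> # BERT-base scale: 12 layers, hidden size 768, 12 attention heads, FFN size 3072.
>>> bert_layer = ms.nn.TransformerEncoderLayer(d_model=768, nhead=12,
...                                            dim_feedforward=3072, activation='gelu')
>>> bert_encoder = ms.nn.TransformerEncoder(bert_layer, num_layers=12)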