mindspore.nn.TransformerDecoderLayer

class mindspore.nn.TransformerDecoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, Cell, callable] = 'relu', layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False)[source]

Transformer Decoder Layer. This is an implementation of a single layer of the transformer decoder, including self-attention, cross-attention and feedforward sublayers.

Warning

This is an experimental API that is subject to change or deletion.

Parameters
  • d_model (int) – The number of expected features in the input tensor.

  • nhead (int) – The number of heads in the MultiheadAttention modules.

  • dim_feedforward (int) – The dimension of the feedforward layer. Default: 2048.

  • dropout (float) – The dropout value. Default: 0.1.

  • activation (Union[str, Cell, callable]) – The activation function of the intermediate layer. Can be a string ("relu" or "gelu"), a Cell instance (nn.ReLU() or nn.GELU()) or a callable (ops.relu or ops.gelu). The three forms are interchangeable; see the sketch after this parameter list. Default: "relu".

  • layer_norm_eps (float) – The epsilon value in LayerNorm modules. Default: 1e-5.

  • batch_first (bool) – If batch_first=True, the shape of the input and output tensors is \((batch, seq, feature)\); otherwise the shape is \((seq, batch, feature)\). Default: False.

  • norm_first (bool) – If norm_first=True, layer normalization is applied before the attention and feedforward operations; otherwise it is applied after them. Default: False.
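
The three accepted forms of activation are interchangeable. As a minimal sketch, the following constructions all select GELU, using only the forms listed above:

>>> from mindspore import nn, ops
>>> layer_str = nn.TransformerDecoderLayer(d_model=512, nhead=8, activation='gelu')
>>> layer_cell = nn.TransformerDecoderLayer(d_model=512, nhead=8, activation=nn.GELU())
>>> layer_fn = nn.TransformerDecoderLayer(d_model=512, nhead=8, activation=ops.gelu)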

Inputs:
  • tgt (Tensor): The sequence to the decoder layer.

  • memory (Tensor): The sequence from the last layer of the encoder.

  • tgt_mask (Tensor, optional): The mask for the tgt sequence. Default: None. A causal-mask sketch follows this list.

  • memory_mask (Tensor, optional): The mask of the memory sequence. Default: None.

  • tgt_key_padding_mask (Tensor, optional): The mask of the tgt keys per batch. Default: None.

  • memory_key_padding_mask (Tensor, optional): The mask of the memory keys per batch. Default: None.
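
As referenced in tgt_mask above, a common use is a causal mask that prevents each target position from attending to later positions. A minimal sketch, assuming the boolean mask convention of mindspore.nn.MultiheadAttention, where True entries mark positions that may not be attended to:

>>> import mindspore as ms
>>> import numpy as np
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)
>>> tgt = ms.Tensor(np.random.rand(20, 32, 512), ms.float32)
>>> # True above the diagonal: position i may not attend to positions j > i
>>> tgt_mask = ms.Tensor(np.triu(np.ones((20, 20), dtype=np.bool_), k=1))
>>> out = decoder_layer(tgt, memory, tgt_mask=tgt_mask)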

Outputs:

Tensor, the output of the decoder layer, with the same shape as tgt.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)
>>> tgt = ms.Tensor(np.random.rand(20, 32, 512), ms.float32)
>>> out = decoder_layer(tgt, memory)
>>> # Alternatively, when `batch_first` is ``True``:
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
>>> memory = ms.Tensor(np.random.rand(32, 10, 512), ms.float32)
>>> tgt = ms.Tensor(np.random.rand(32, 20, 512), ms.float32)
>>> out = decoder_layer(tgt, memory)
>>> print(out.shape)
(32, 20, 512)
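
A single layer is typically stacked into a full decoder. A minimal sketch, assuming the companion class mindspore.nn.TransformerDecoder with a (decoder_layer, num_layers) constructor, reusing the batch-first decoder_layer, tgt and memory from above:

>>> decoder = ms.nn.TransformerDecoder(decoder_layer, num_layers=6)
>>> out = decoder(tgt, memory)
>>> print(out.shape)
(32, 20, 512)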