mindspore.nn.TransformerDecoderLayer
- class mindspore.nn.TransformerDecoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, Cell, callable] = 'relu', layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, dtype=mstype.float32)[source]
Transformer Decoder Layer. This is an implementation of a single layer of the transformer decoder, including self-attention, cross-attention and a feedforward layer.
- Parameters
d_model (int) – The number of expected features in the input tensor.
nhead (int) – The number of heads in the MultiheadAttention modules.
dim_feedforward (int) – The dimension of the feedforward layer. Default: 2048.
dropout (float) – The dropout value. Default: 0.1.
activation (Union[str, callable, Cell]) – The activation function of the intermediate layer. Can be a string ("relu" or "gelu"), a Cell instance (mindspore.nn.ReLU or mindspore.nn.GELU) or a callable (mindspore.ops.relu() or mindspore.ops.gelu()). Default: "relu".
layer_norm_eps (float) – The epsilon value in LayerNorm modules. Default: 1e-5.
batch_first (bool) – If batch_first=True, the shape of input and output tensors is \((batch, seq, feature)\); otherwise the shape is \((seq, batch, feature)\). Default: False.
norm_first (bool) – If norm_first=True, layer norm is applied before the attention and feedforward operations; if norm_first=False, layer norm is applied after them. A construction sketch follows this list. Default: False.
dtype (mindspore.dtype) – Data type of Parameter. Default: mstype.float32.
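For illustration, a pre-norm variant with a callable activation could be constructed as follows (a minimal sketch; the numeric values are illustrative, not recommended settings):
>>> import mindspore as ms
>>> # Pre-norm layer (norm_first=True) with the functional GELU as activation;
>>> # all numeric values below are illustrative.
>>> layer = ms.nn.TransformerDecoderLayer(d_model=256, nhead=4,
...                                       dim_feedforward=1024, dropout=0.2,
...                                       activation=ms.ops.gelu,
...                                       layer_norm_eps=1e-6,
...                                       batch_first=True, norm_first=True)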
- Inputs:
tgt (Tensor) - The sequence to the decoder layer. For unbatched input, the shape is \((T, E)\); otherwise, if batch_first=False, the shape is \((T, N, E)\) and if batch_first=True, the shape is \((N, T, E)\), where \(T\) is the target sequence length. Supported types: float16, float32, float64.
memory (Tensor) - The sequence from the last layer of the encoder. Supported types: float16, float32, float64.
tgt_mask (Tensor, optional) - The mask of the tgt sequence. The shape is \((T, T)\) or \((N*nhead, T, T)\). A causal-mask sketch follows this list. Supported types: float16, float32, float64, bool. Default: None.
memory_mask (Tensor, optional) - The mask of the memory sequence. The shape is \((T, S)\), where \(S\) is the source sequence length. Supported types: float16, float32, float64, bool. Default: None.
tgt_key_padding_mask (Tensor, optional) - The mask of the tgt keys per batch. The shape is \((T)\) for unbatched input, otherwise \((N, T)\). Supported types: float16, float32, float64, bool. Default: None.
memory_key_padding_mask (Tensor, optional) - The mask of the memory keys per batch. The shape is \((S)\) for unbatched input, otherwise \((N, S)\). Supported types: float16, float32, float64, bool. Default: None.
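For example, a causal tgt_mask of shape \((T, T)\) can be built with -inf above the diagonal so each target position only attends to itself and earlier positions (a minimal sketch; it assumes the usual additive float-mask convention for attention masks):
>>> import mindspore as ms
>>> import numpy as np
>>> T = 20
>>> # Additive float mask: -inf above the diagonal blocks attention to future positions.
>>> causal = np.triu(np.full((T, T), float('-inf'), dtype=np.float32), k=1)
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> tgt = ms.Tensor(np.random.rand(T, 32, 512), ms.float32)
>>> memory = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)
>>> out = decoder_layer(tgt, memory, tgt_mask=ms.Tensor(causal))
>>> print(out.shape)
(20, 32, 512)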
- Outputs:
Tensor. The output has the same shape and dtype as tgt.
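As a quick check of this (a minimal sketch using the unbatched \((T, E)\) shape for tgt described above, and assuming memory takes the matching unbatched shape \((S, E)\)):
>>> import mindspore as ms
>>> import numpy as np
>>> layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> tgt = ms.Tensor(np.random.rand(20, 512), ms.float32)      # (T, E), unbatched
>>> memory = ms.Tensor(np.random.rand(10, 512), ms.float32)   # assumed (S, E), unbatched
>>> out = layer(tgt, memory)
>>> print(out.shape == tgt.shape, out.dtype == tgt.dtype)
True True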
- Raises
ValueError – If the init argument activation is not a str, a callable, or a Cell instance.
ValueError – If the init argument activation is not a mindspore.nn.ReLU or mindspore.nn.GELU instance, mindspore.ops.relu(), mindspore.ops.gelu(), "relu" or "gelu".
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore as ms
>>> import numpy as np
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = ms.Tensor(np.random.rand(10, 32, 512), ms.float32)
>>> tgt = ms.Tensor(np.random.rand(20, 32, 512), ms.float32)
>>> out = decoder_layer(tgt, memory)
>>> print(out.shape)
(20, 32, 512)
>>> # Alternatively, when `batch_first` is ``True``:
>>> decoder_layer = ms.nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
>>> memory = ms.Tensor(np.random.rand(32, 10, 512), ms.float32)
>>> tgt = ms.Tensor(np.random.rand(32, 20, 512), ms.float32)
>>> out = decoder_layer(tgt, memory)
>>> print(out.shape)
(32, 20, 512)