mindspore.nn.LSTM
- class mindspore.nn.LSTM(*args, **kwargs)
Stacked LSTM (Long Short-Term Memory) layers.
Apply the LSTM layer to the input.
There are two pipelines connecting two consecutive cells in an LSTM model: one is the cell state pipeline and the other is the hidden state pipeline. Denote two consecutive time steps as \(t-1\) and \(t\). Given an input \(x_t\) at time \(t\), a hidden state \(h_{t-1}\) and a cell state \(c_{t-1}\) of the layer at time \(t-1\), the cell state and hidden state at time \(t\) are computed using a gating mechanism. The input gate \(i_t\) is designed to protect the cell from perturbation by irrelevant inputs. The forget gate \(f_t\) affords protection of the cell by forgetting some information from the past, which is stored in \(h_{t-1}\). The output gate \(o_t\) protects other units from perturbation by currently irrelevant memory contents. The candidate cell state \(\tilde{c}_t\) is calculated from the current input and the previous hidden state, and the input gate is then applied to it. Finally, the current cell state \(c_{t}\) and hidden state \(h_{t}\) are computed from the gates and cell states above. The complete formulation is as follows.
\[\begin{split}\begin{array}{ll} \\
    i_t = \sigma(W_{ix} x_t + b_{ix} + W_{ih} h_{(t-1)} + b_{ih}) \\
    f_t = \sigma(W_{fx} x_t + b_{fx} + W_{fh} h_{(t-1)} + b_{fh}) \\
    \tilde{c}_t = \tanh(W_{cx} x_t + b_{cx} + W_{ch} h_{(t-1)} + b_{ch}) \\
    o_t = \sigma(W_{ox} x_t + b_{ox} + W_{oh} h_{(t-1)} + b_{oh}) \\
    c_t = f_t * c_{(t-1)} + i_t * \tilde{c}_t \\
    h_t = o_t * \tanh(c_t) \\
\end{array}\end{split}\]
Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W\) and \(b\) are the learnable weights and biases between the output and the input in the formula. For instance, \(W_{ix}\) and \(b_{ix}\) are the weight and bias used to transform the input \(x\) into the input gate \(i\). Details can be found in the papers LONG SHORT-TERM MEMORY and Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling.
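The gate equations can be traced in plain NumPy. The sketch below is a minimal single-step reference that follows the naming of the formula above; it is not the operator's actual implementation, it merges the two bias terms of each gate into one, and all weights are randomly initialized purely for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

input_size, hidden_size = 10, 16
rng = np.random.default_rng(0)
# One (input-to-hidden, hidden-to-hidden) weight pair and one merged bias
# per gate; g in "ifco" stands for i_t, f_t, c~_t and o_t in the formula.
W = {g: (rng.standard_normal((hidden_size, input_size)),
         rng.standard_normal((hidden_size, hidden_size)))
     for g in "ifco"}
b = {g: np.zeros(hidden_size) for g in "ifco"}

def lstm_cell_step(x_t, h_prev, c_prev):
    gate = lambda g, act: act(W[g][0] @ x_t + W[g][1] @ h_prev + b[g])
    i_t = gate("i", sigmoid)               # input gate
    f_t = gate("f", sigmoid)               # forget gate
    c_tilde = gate("c", np.tanh)           # candidate cell state
    o_t = gate("o", sigmoid)               # output gate
    c_t = f_t * c_prev + i_t * c_tilde     # new cell state
    h_t = o_t * np.tanh(c_t)               # new hidden state
    return h_t, c_t

h, c = np.zeros(hidden_size), np.zeros(hidden_size)
h, c = lstm_cell_step(rng.standard_normal(input_size), h, c)
print(h.shape, c.shape)  # (16,) (16,)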
The LSTM layer unrolls the recurrence over the time steps of the sequence: given the input sequence and an initial state, it returns the hidden states of all time steps, stacked into one matrix, together with the hidden and cell states of the last time step. The hidden state of the last time step serves as the encoded feature of the input sequence and is passed on to the next layer, as sketched after the formula below.
\[h_{0:n}, (h_{n}, c_{n}) = LSTM(x_{0:n}, (h_{0}, c_{0}))\]
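To make this encoding pattern concrete, here is a minimal, hedged sketch that keeps only the last hidden state as the sequence feature. The sizes (input_size=10, hidden_size=16, two layers) are arbitrary choices for illustration, not values prescribed by the API.

import numpy as np
import mindspore as ms

net = ms.nn.LSTM(input_size=10, hidden_size=16, num_layers=2, batch_first=True)
x = ms.Tensor(np.ones([3, 5, 10]).astype(np.float32))    # (batch, seq_len, input)
h0 = ms.Tensor(np.zeros([2, 3, 16]).astype(np.float32))  # (num_directions * num_layers, batch, hidden)
c0 = ms.Tensor(np.zeros([2, 3, 16]).astype(np.float32))
output, (hn, cn) = net(x, (h0, c0))
encoding = hn[-1]        # last layer's final hidden state as the sequence encoding
print(encoding.shape)    # (3, 16)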
- Parameters:
input_size (int) – Number of features of the input.
hidden_size (int) – Number of features of the hidden layer.
num_layers (int) – Number of stacked LSTM layers. Default: 1.
has_bias (bool) – Whether the cell has biases \(b_{ih}\) and \(b_{hh}\). Default: True.
batch_first (bool) – Specifies whether the first dimension of input x is batch_size. Default: False.
dropout (float, int) – If not 0, appends a Dropout layer to the outputs of each LSTM layer except the last layer. The range of dropout is [0.0, 1.0). Default: 0.
bidirectional (bool) – Specifies whether it is a bidirectional LSTM; num_directions=2 if bidirectional=True, otherwise 1. A shape sketch follows this list. Default: False.
dtype (mindspore.dtype) – Dtype of Parameters. Default: mstype.float32.
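To illustrate how num_layers, bidirectional and hidden_size combine into the state and output shapes, here is a hedged sketch; all sizes and the dropout value are arbitrary choices for illustration.

import numpy as np
import mindspore as ms

num_layers, num_directions, batch, seq_len = 2, 2, 4, 7
net = ms.nn.LSTM(input_size=10, hidden_size=16, num_layers=num_layers,
                 batch_first=True, dropout=0.1, bidirectional=True)
x = ms.Tensor(np.ones([batch, seq_len, 10]).astype(np.float32))
# Initial states carry one slice per layer and direction.
h0 = ms.Tensor(np.zeros([num_directions * num_layers, batch, 16]).astype(np.float32))
c0 = ms.Tensor(np.zeros([num_directions * num_layers, batch, 16]).astype(np.float32))
output, (hn, cn) = net(x, (h0, c0))
print(output.shape)  # (4, 7, 32): last axis is num_directions * hidden_size
print(hn.shape)      # (4, 4, 16): (num_directions * num_layers, batch, hidden)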
- Inputs:
x (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape \((seq\_len, batch\_size, input\_size)\) or \((batch\_size, seq\_len, input\_size)\).
hx (tuple) - A tuple of two Tensors (h_0, c_0), both of data type mindspore.float32 or mindspore.float16 and shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).
seq_length (Tensor) - The length of each sequence in the input batch. Tensor of shape \((batch\_size)\). Default: None. This input indicates the real sequence length before padding, so that padded elements are not used to compute the hidden state and do not affect the final output. It is recommended to use this input when x contains padded elements; a usage sketch follows this list.
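The seq_length input is easiest to see with a padded batch. Below is a hedged sketch; the batch contents, the invented per-sequence lengths, and the int32 dtype for seq_length are assumptions for illustration.

import numpy as np
import mindspore as ms

net = ms.nn.LSTM(input_size=10, hidden_size=16, batch_first=True)
x = ms.Tensor(np.zeros([3, 5, 10]).astype(np.float32))   # batch padded to seq_len=5
h0 = ms.Tensor(np.zeros([1, 3, 16]).astype(np.float32))
c0 = ms.Tensor(np.zeros([1, 3, 16]).astype(np.float32))
# Real (pre-padding) length of each of the 3 sequences; int32 assumed here.
seq_length = ms.Tensor(np.array([5, 3, 2]).astype(np.int32))
output, (hn, cn) = net(x, (h0, c0), seq_length)
print(output.shape)  # (3, 5, 16); padded steps do not affect (hn, cn)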
- Outputs:
Tuple, a tuple containing (output, (h_n, c_n)).
output (Tensor) - Tensor of shape \((seq\_len, batch\_size, num\_directions * hidden\_size)\).
hx_n (tuple) - A tuple of two Tensors (h_n, c_n), both of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\). A sketch relating output and h_n follows this list.
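As a hedged consistency check on the shapes above: for a unidirectional, single-layer LSTM, the last time step of output should coincide with the corresponding entry of h_n. This equality is inferred from the shape descriptions, not a documented guarantee.

import numpy as np
import mindspore as ms

net = ms.nn.LSTM(input_size=10, hidden_size=16, num_layers=1, batch_first=True)
x = ms.Tensor(np.ones([3, 5, 10]).astype(np.float32))
h0 = ms.Tensor(np.zeros([1, 3, 16]).astype(np.float32))
c0 = ms.Tensor(np.zeros([1, 3, 16]).astype(np.float32))
output, (hn, cn) = net(x, (h0, c0))
# Last time step of `output` vs. the single layer/direction slice of h_n.
print(np.allclose(output.asnumpy()[:, -1, :], hn.asnumpy()[0]))  # True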
- Raises:
TypeError – If input_size, hidden_size or num_layers is not an int.
TypeError – If has_bias, batch_first or bidirectional is not a bool.
TypeError – If dropout is not a float.
ValueError – If dropout is not in range [0.0, 1.0).
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore as ms
>>> import numpy as np
>>> net = ms.nn.LSTM(10, 16, 2, has_bias=True, batch_first=True, bidirectional=False)
>>> x = ms.Tensor(np.ones([3, 5, 10]).astype(np.float32))
>>> h0 = ms.Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> c0 = ms.Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> output, (hn, cn) = net(x, (h0, c0))
>>> print(output.shape)
(3, 5, 16)