mindspore.nn.LSTMCell

class mindspore.nn.LSTMCell(input_size: int, hidden_size: int, has_bias: bool = True)[source]

An LSTM (Long Short-Term Memory) cell.

i_t = \sigma(W_{ix} x_t + b_{ix} + W_{ih} h_{(t-1)} + b_{ih})
f_t = \sigma(W_{fx} x_t + b_{fx} + W_{fh} h_{(t-1)} + b_{fh})
\tilde{c}_t = \tanh(W_{cx} x_t + b_{cx} + W_{ch} h_{(t-1)} + b_{ch})
o_t = \sigma(W_{ox} x_t + b_{ox} + W_{oh} h_{(t-1)} + b_{oh})
c_t = f_t \odot c_{(t-1)} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)

Here \sigma is the sigmoid function, and \odot is the Hadamard product. W, b are the learnable weights between the output and the input in the formula. For instance, W_{ix}, b_{ix} are the weight and bias used to transform the input x to the input gate i. Details can be found in the papers LONG SHORT-TERM MEMORY and Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling.
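The gate equations above can be sketched in plain NumPy for a single sample (a minimal illustration, not MindSpore's implementation; the stacked weight layout W_x, W_h, the gate order [i, f, g, o], and the function names are assumptions made for this sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W_x, W_h, b_x, b_h):
    """One LSTM step for a single sample.

    x:      (input_size,)         current input x_t
    h_prev: (hidden_size,)        previous hidden state h_{t-1}
    c_prev: (hidden_size,)        previous cell state c_{t-1}
    W_x:    (4*hidden, input)     input weights, gates stacked as [i, f, g, o]
    W_h:    (4*hidden, hidden)    recurrent weights, same stacking
    b_x, b_h: (4*hidden,)         input and recurrent biases
    """
    # All four gate pre-activations in one affine transform.
    gates = W_x @ x + b_x + W_h @ h_prev + b_h
    hid = h_prev.shape[0]
    i = sigmoid(gates[0:hid])          # input gate i_t
    f = sigmoid(gates[hid:2 * hid])    # forget gate f_t
    g = np.tanh(gates[2 * hid:3 * hid])  # candidate cell state \tilde{c}_t
    o = sigmoid(gates[3 * hid:4 * hid])  # output gate o_t
    c = f * c_prev + i * g             # c_t = f_t . c_{t-1} + i_t . \tilde{c}_t
    h = o * np.tanh(c)                 # h_t = o_t . tanh(c_t)
    return h, c
```

Because h_t is an output gate times a tanh, every entry of the new hidden state lies in (-1, 1).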

The encapsulated LSTMCell can be simplified to the following formula:

h', c' = LSTMCell(x, (h_0, c_0))

Parameters
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape (batch_size,input_size).

  • hx (tuple) - A tuple of two Tensors (h_0, c_0) both of data type mindspore.float32 and shape (batch_size,hidden_size). The data type of hx must be the same as x.

Outputs:
  • hx’ (tuple) - A tuple of two Tensors (h’, c’), both of shape (batch_size, hidden_size) and the same data type as x.

Raises
  • TypeError – If input_size or hidden_size is not an int.

  • TypeError – If has_bias is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> net = nn.LSTMCell(10, 16)
>>> x = Tensor(np.ones([5, 3, 10]).astype(np.float32))
>>> h = Tensor(np.ones([3, 16]).astype(np.float32))
>>> c = Tensor(np.ones([3, 16]).astype(np.float32))
>>> output = []
>>> for i in range(5):
...     h, c = net(x[i], (h, c))
...     output.append((h, c))
>>> print(output[0][0].shape)
(3, 16)