mindspore.nn.GRUCell

class mindspore.nn.GRUCell(input_size: int, hidden_size: int, has_bias: bool = True)[source]

A GRU (Gated Recurrent Unit) cell.

\[\begin{array}{ll}
r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\
z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\
n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\
h' = (1 - z) * n + z * h
\end{array}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are the learnable weights and biases in the formula; for instance, \(W_{ir}, b_{ir}\) are the weight and bias used to transform the input \(x\) into the reset gate \(r\). Details can be found in the paper Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation.
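As an illustration of the update above, here is a minimal NumPy sketch of a single GRU step. The concatenated weight layout (W_ih, W_hh with the gates stacked in r, z, n order) and the function name are assumptions for exposition, not necessarily GRUCell's internal parameter layout:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical layout: W_ih is (3*hidden_size, input_size), W_hh is
# (3*hidden_size, hidden_size); gate rows are stacked in r, z, n order.
def gru_cell_step(x, h, W_ih, W_hh, b_ih, b_hh):
    i_r, i_z, i_n = np.split(x @ W_ih.T + b_ih, 3, axis=-1)
    h_r, h_z, h_n = np.split(h @ W_hh.T + b_hh, 3, axis=-1)
    r = sigmoid(i_r + h_r)          # reset gate
    z = sigmoid(i_z + h_z)          # update gate
    n = np.tanh(i_n + r * h_n)      # candidate hidden state
    return (1 - z) * n + z * h      # h' = (1 - z) * n + z * h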

At the NN layer, the GRUCell can be simplified into the following formula:

\[h' = GRUCell(x, h_0)\]
Parameters
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • has_bias (bool) – Whether the cell has the input and hidden biases (the \(b_{i*}\) and \(b_{h*}\) terms in the formula above). Default: True.

Inputs:
  • x (Tensor) - Tensor of shape (batch_size, input_size).

  • hx (Tensor) - Tensor of data type mindspore.float32 and shape (batch_size, hidden_size). The data type of hx must be the same as that of x.

Outputs:
  • hx' (Tensor) - Tensor of shape (batch_size, hidden_size).

Raises
  • TypeError – If input_size or hidden_size is not an int.

  • TypeError – If has_bias is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.GRUCell(10, 16)
>>> x = Tensor(np.ones([5, 3, 10]).astype(np.float32))  # (seq_len, batch_size, input_size)
>>> hx = Tensor(np.ones([3, 16]).astype(np.float32))  # (batch_size, hidden_size)
>>> output = []
>>> for i in range(5):
...     hx = net(x[i], hx)
...     output.append(hx)
>>> print(output[0].shape)
(3, 16)
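
The loop above leaves the per-step hidden states in a Python list; if a single sequence tensor is needed, they can be stacked along a new leading (time) axis. A minimal sketch using mindspore.ops.stack, continuing from the output list in the example above (assuming ops.stack accepts the list directly):

>>> import mindspore.ops as ops
>>> seq = ops.stack(output)
>>> print(seq.shape)
(5, 3, 16)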