mindspore.ops.ApplyRMSProp

class mindspore.ops.ApplyRMSProp(use_locking=False)

Optimizer that implements the Root Mean Square Propagation (RMSProp) algorithm. Please refer to the usage in the source code of mindspore.nn.RMSProp.

The update formulas of the ApplyRMSProp algorithm are as follows:

\[\begin{aligned}
s_{t+1} &= \rho \, s_{t} + (1 - \rho)\,(\nabla Q_{i}(w))^2 \\
m_{t+1} &= \beta \, m_{t} + \frac{\eta}{\sqrt{s_{t+1} + \epsilon}} \, \nabla Q_{i}(w) \\
w &= w - m_{t+1}
\end{aligned}\]

where \(w\) represents var, the variable to be updated; \(s_{t+1}\) represents mean_square and \(s_{t}\) is its value at the previous step; \(m_{t+1}\) represents moment and \(m_{t}\) is its value at the previous step; \(\rho\) represents decay; \(\beta\) is the momentum term and represents momentum; \(\epsilon\) is a smoothing term to avoid division by zero and represents epsilon; \(\eta\) represents learning_rate; \(\nabla Q_{i}(w)\) represents grad.
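As a concrete check of these formulas, here is a minimal NumPy sketch of one dense update step. The variable names mirror the symbols above, and the hyperparameter values are the ones used in the Examples section below:

>>> import numpy as np
>>> # Start from the same state as the Examples section: all ones.
>>> w = s = m = g = np.ones([2, 2]).astype(np.float32)
>>> decay, momentum, epsilon, lr = 0.0, 1e-10, 0.001, 0.01
>>> s = decay * s + (1 - decay) * g ** 2              # mean_square update
>>> m = momentum * m + lr / np.sqrt(s + epsilon) * g  # moment update
>>> w = w - m  # var update: each entry is now ~0.990005, matching the Examples output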

Warning

Note that in the dense implementation of this algorithm, mean_square and moment will be updated even if grad is 0; in the sparse implementation, mean_square and moment will not be updated in iterations during which grad is 0.
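To see why, substitute \(\nabla Q_{i}(w) = 0\) into the formulas above. The dense step still computes

\[s_{t+1} = \rho \, s_{t}, \qquad m_{t+1} = \beta \, m_{t}, \qquad w = w - \beta \, m_{t},\]

so mean_square and moment keep decaying, and var keeps moving as long as the moment \(m_{t}\) is nonzero.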

Parameters

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must be the same type as var.

  • moment (Tensor) - Delta of var, must be the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - Gradient, must be the same type as var.

  • decay (float) - Decay rate. Only constant value is allowed.

  • momentum (float) - Momentum. Only constant value is allowed.

  • epsilon (float) - Ridge term. Only constant value is allowed.

Outputs:

Tensor, the updated var.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_square, moment or grad is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of decay, momentum or epsilon is not float.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • ValueError – If decay, momentum or epsilon is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, nn, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_rms_prop = ops.ApplyRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_rms_prop(self.var, mean_square, moment, lr, grad, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)  # decay, momentum, epsilon, lr
>>> print(net.var.asnumpy())
[[0.990005  0.990005]
 [0.990005  0.990005]]
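The primitive can also be called directly outside a Cell. A minimal sketch, assuming PyNative mode and reusing the tensors defined above (the argument order matches the construct call: var, mean_square, moment, lr, grad, decay, momentum, epsilon):

>>> import mindspore as ms
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
>>> apply_rms_prop = ops.ApplyRMSProp()
>>> output = apply_rms_prop(var, mean_square, moment, 0.01, grad, 0.0, 1e-10, 0.001)
>>> # var is updated in place; each entry should again be close to 0.990005.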