mindspore.ops.ApplyRMSProp
- class mindspore.ops.ApplyRMSProp(use_locking=False)[source]
Optimizer that implements the Root Mean Square propagation (RMSProp) algorithm. Please refer to the usage in the source code of nn.RMSProp.
The updating formulas of the ApplyRMSProp algorithm are as follows,

$$s_{t+1} = \rho s_t + (1 - \rho)(\nabla Q_{i,t})^2$$
$$m_{t+1} = \mu m_t + \frac{\eta}{\sqrt{s_{t+1} + \epsilon}} \nabla Q_{i,t}$$
$$w_{t+1} = w_t - m_{t+1}$$

where \(w\) represents var, which will be updated. \(s_{t+1}\) represents mean_square and \(s_t\) is the last moment of \(s_{t+1}\). \(m_{t+1}\) represents moment and \(m_t\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\mu\) is the momentum term, which represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, which represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i,t}\) represents grad.
- Parameters
use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.
- Inputs:
var (Tensor) - Weights to be updated.
mean_square (Tensor) - Mean square gradients, must have the same type as var.
moment (Tensor) - Delta of var, must have the same type as var.
learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.
grad (Tensor) - Gradient, must have the same type as var.
decay (float) - Decay rate. Only constant value is allowed.
momentum (float) - Momentum. Only constant value is allowed.
epsilon (float) - Ridge term. Only constant value is allowed.
- Outputs:
Tensor, parameters to be updated.
- Raises
TypeError – If use_locking is not a bool.
TypeError – If var, mean_square, moment or grad is not a Tensor.
TypeError – If learning_rate is neither a Number nor a Tensor.
TypeError – If dtype of decay, momentum or epsilon is not float.
TypeError – If dtype of learning_rate is neither float16 nor float32.
ValueError – If decay, momentum or epsilon is not a constant value.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import numpy as np
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> from mindspore import Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_rms_prop = ops.ApplyRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_rms_prop(self.var, mean_square, moment, lr, grad, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.990005 0.990005]
 [0.990005 0.990005]]
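The updating formulas above can be checked against the example with a plain-NumPy sketch. This is an illustrative restatement of the update rule, not the MindSpore kernel; the helper name rms_prop_update is an assumption.

```python
import numpy as np

def rms_prop_update(var, mean_square, moment, lr, grad, decay, momentum, epsilon):
    """One RMSProp step per the formulas: s, m and w updated in order."""
    # s_{t+1} = rho * s_t + (1 - rho) * grad^2
    mean_square = decay * mean_square + (1.0 - decay) * grad ** 2
    # m_{t+1} = mu * m_t + eta / sqrt(s_{t+1} + eps) * grad
    moment = momentum * moment + lr / np.sqrt(mean_square + epsilon) * grad
    # w_{t+1} = w_t - m_{t+1}
    var = var - moment
    return var, mean_square, moment

# Same values as the Examples section: all-ones tensors,
# lr=0.01, decay=0.0, momentum=1e-10, epsilon=0.001.
var = np.ones([2, 2], dtype=np.float32)
mean_square = np.ones([2, 2], dtype=np.float32)
moment = np.ones([2, 2], dtype=np.float32)
grad = np.ones([2, 2], dtype=np.float32)

var, mean_square, moment = rms_prop_update(
    var, mean_square, moment,
    lr=0.01, grad=grad, decay=0.0, momentum=1e-10, epsilon=0.001)
print(var)  # ~0.990005 everywhere, matching the printed net.var above
```

With decay=0.0 the running mean_square collapses to grad², so the step is simply lr / sqrt(grad² + epsilon) * grad ≈ 0.009995, giving var ≈ 0.990005.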