mindspore.ops.ApplyMomentum
- class mindspore.ops.ApplyMomentum(use_nesterov=False, use_locking=False, gradient_scale=1.0)
Optimizer that implements the Momentum algorithm.
Refer to the paper On the importance of initialization and momentum in deep learning for more details, and to mindspore.nn.Momentum for more details about the formula and usage.
Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. Data type conversion of Parameter is not supported; a RuntimeError exception will be thrown.
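For reference, a sketch of the standard momentum update implemented by this family of optimizers (see mindspore.nn.Momentum for the authoritative formula) is:

accumulation = momentum * accumulation + gradient
variable = variable - learning_rate * accumulation

When use_nesterov is True, the variable update instead uses the look-ahead form variable = variable - learning_rate * (gradient + momentum * accumulation).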
- Parameters
use_nesterov (bool) - Enable Nesterov momentum. Default: False.
use_locking (bool) - Whether to enable a lock to protect the updates of the variable and accumulation tensors. Default: False.
gradient_scale (float) - The scale of the gradient. Default: 1.0.
- Inputs:
variable (Parameter) - Weights to be updated. Data type must be float.
accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type as variable.
learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.
gradient (Tensor) - Gradient, has the same data type as variable.
momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.
- Outputs:
Tensor, parameters to be updated.
- Raises
TypeError – If use_locking or use_nesterov is not a bool, or if gradient_scale is not a float.
- Supported Platforms:
Ascend GPU CPU
Examples
Please refer to the usage in mindspore.nn.Momentum.
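Below is a minimal sketch of calling the operator directly. The network structure, names, and numeric values are illustrative assumptions, not taken from this page; only the input order (variable, accumulation, learning_rate, gradient, momentum) comes from the Inputs section above.

import numpy as np
import mindspore
from mindspore import Tensor, Parameter, nn, ops

class MomentumNet(nn.Cell):
    def __init__(self):
        super(MomentumNet, self).__init__()
        self.apply_momentum = ops.ApplyMomentum()
        # Hypothetical starting values, chosen only for illustration.
        self.variable = Parameter(
            Tensor(np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)), name="variable")
        self.accumulation = Parameter(
            Tensor(np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)), name="accumulation")

    def construct(self, lr, grad, momentum):
        # Input order matches the Inputs section:
        # variable, accumulation, learning_rate, gradient, momentum.
        return self.apply_momentum(self.variable, self.accumulation, lr, grad, momentum)

net = MomentumNet()
lr = Tensor(0.1, mindspore.float32)
momentum = Tensor(0.9, mindspore.float32)
grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]], np.float32))
output = net(lr, grad, momentum)  # the updated variable
print(output)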