mindspore.ops.ApplyFtrl

class mindspore.ops.ApplyFtrl(use_locking=False)

Updates relevant entries according to the FTRL scheme.

For more details, please refer to mindspore.nn.FTRL.
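
For reference, the update can be sketched as follows, following the formulation described in mindspore.nn.FTRL (here \(m\) denotes accum, \(u\) denotes linear, \(\omega\) denotes var, \(g\) denotes grad, \(\alpha\) denotes lr and \(p\) denotes lr_power):

\[
\begin{array}{ll}
m_{t+1} = m_{t} + g^2 \\
u_{t+1} = u_{t} + g - \dfrac{m_{t+1}^{-p} - m_{t}^{-p}}{\alpha}\,\omega_{t} \\
\omega_{t+1} =
\begin{cases}
\dfrac{\operatorname{sign}(u_{t+1})\, l1 - u_{t+1}}{\dfrac{m_{t+1}^{-p}}{\alpha} + 2\, l2} & \text{if } |u_{t+1}| > l1 \\
0 & \text{otherwise}
\end{cases}
\end{array}
\]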

Note

  • Currently, only positive numbers are supported on the Ascend platform; the calculation results for other scenarios are undefined.

  • Inputs of var, accum, linear and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the data type with the highest priority.

Parameters

use_locking (bool) – If True, use locks for the update operation. Default: False.

Inputs:
  • var (Union[Parameter, Tensor]) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions.

  • accum (Union[Parameter, Tensor]) - The accumulation to be updated; must have the same shape as var.

  • linear (Union[Parameter, Tensor]) - The linear coefficient to be updated; must have the same shape as var.

  • grad (Tensor) - Gradient. The data type must be float16 or float32.

  • lr (Union[Number, Tensor]) - The learning rate value; must be positive. It must be a float number or a scalar tensor with float16 or float32 data type. Default: 0.001.

  • l1 (Union[Number, Tensor]) - l1 regularization strength; must be greater than or equal to zero. It must be a float number or a scalar tensor with float16 or float32 data type. Default: 0.0.

  • l2 (Union[Number, Tensor]) - l2 regularization strength; must be greater than or equal to zero. It must be a float number or a scalar tensor with float16 or float32 data type. Default: 0.0.

  • lr_power (Union[Number, Tensor]) - Learning rate power, which controls how the learning rate decreases during training; must be less than or equal to zero. If lr_power is zero, a fixed learning rate is used. It must be a float number or a scalar tensor with float16 or float32 data type. Default: -0.5.

Outputs:
  • var (Tensor) - Represents the updated var. Because the input parameters or tensors are updated in place, this value is always zero when the platform is GPU.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, grad, lr, l1, l2 or lr_power is neither float16 nor float32.

  • TypeError – If lr, l1, l2 or lr_power is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • TypeError – If the parameter or tensor types of var, accum and linear are inconsistent.

  • TypeError – If the parameter or tensor types of grad, lr, l1, l2 or lr_power are inconsistent with var and their precision is higher than that of var.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, nn, ops, Parameter
>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                  [0.7, 0.8]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(input_x)
>>> print(net.var.asnumpy())
[[ 0.0390525  0.11492836]
 [ 0.00066425 0.15075898]]
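
Continuing the session above, the same update can be reproduced with plain NumPy. The ftrl_step helper below is hypothetical (not part of the MindSpore API); it is a minimal sketch that assumes the FTRL formulation referenced above, and its result should match the updated var within floating-point tolerance.

>>> # Hypothetical reference implementation of one FTRL step (not a MindSpore API),
>>> # written with NumPy only to illustrate how the inputs interact.
>>> def ftrl_step(var, accum, linear, grad, lr, l1, l2, lr_power):
...     accum_new = accum + grad ** 2
...     # with lr_power = -0.5, accum ** -lr_power is simply sqrt(accum)
...     linear_new = linear + grad - (accum_new ** -lr_power - accum ** -lr_power) / lr * var
...     quadratic = accum_new ** -lr_power / lr + 2 * l2
...     var_new = np.where(np.abs(linear_new) > l1,
...                        (np.sign(linear_new) * l1 - linear_new) / quadratic,
...                        0.0)
...     return var_new, accum_new, linear_new
...
>>> var_ref, _, _ = ftrl_step(np.array([[0.6, 0.4], [0.1, 0.5]], np.float32),
...                           np.array([[0.6, 0.5], [0.2, 0.6]], np.float32),
...                           np.array([[0.9, 0.1], [0.7, 0.8]], np.float32),
...                           input_x.asnumpy(), 0.001, 0.0, 0.0, -0.5)
>>> print(np.allclose(var_ref, net.var.asnumpy(), atol=1e-5))
True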