mindspore.ops.ApplyFtrl
- class mindspore.ops.ApplyFtrl(use_locking=False)
Updates relevant entries according to the FTRL scheme.
For more details, please refer to mindspore.nn.FTRL.
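The operator applies one step of the conventional FTRL-Proximal update. A hedged sketch of that step (writing \(m\) for accum, \(u\) for linear, \(\omega\) for var, \(g\) for grad, \(\alpha\) for lr and \(p\) for lr_power; the exact kernel may differ in corner cases) is:
\[
\begin{aligned}
m_{t+1} &= m_t + g^2 \\
u_{t+1} &= u_t + g - \frac{m_{t+1}^{-p} - m_t^{-p}}{\alpha}\,\omega_t \\
\omega_{t+1} &= \begin{cases}
\dfrac{\operatorname{sign}(u_{t+1})\, l1 - u_{t+1}}{\dfrac{m_{t+1}^{-p}}{\alpha} + 2\, l2} & \text{if } |u_{t+1}| > l1 \\
0 & \text{otherwise}
\end{cases}
\end{aligned}
\]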
- Parameters
use_locking (bool) – Use locks for the updating operation if true. Default: False.
- Inputs:
var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions.
accum (Parameter) - The accumulation to be updated; it must have the same shape and data type as var.
linear (Parameter) - The linear coefficient to be updated; it must have the same shape and data type as var.
grad (Tensor) - Gradient. The data type must be float16 or float32.
lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.
l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.
l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.
lr_power (Union[Number, Tensor]) - Learning rate power, which controls how the learning rate decreases during training; it must be less than or equal to zero. A fixed learning rate is used if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type (a scalar-tensor call is sketched after this list).
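For illustration, a minimal sketch of calling the primitive directly with lr, l1, l2 and lr_power passed as scalar tensors rather than Python floats (assuming PyNative execution; all shapes, values and names below are illustrative, not taken from this page):

import numpy as np
import mindspore as ms
from mindspore import Tensor, Parameter, ops

# Illustrative inputs only; any float16/float32 tensors of matching shape work.
apply_ftrl = ops.ApplyFtrl()
var = Parameter(Tensor(np.ones((2, 2), np.float32)), name="var")
accum = Parameter(Tensor(np.ones((2, 2), np.float32)), name="accum")
linear = Parameter(Tensor(np.zeros((2, 2), np.float32)), name="linear")
grad = Tensor(np.full((2, 2), 0.1, np.float32))
# lr, l1, l2 and lr_power supplied as scalar tensors instead of Python floats.
out = apply_ftrl(var, accum, linear, grad,
                 Tensor(0.001, ms.float32), Tensor(0.0, ms.float32),
                 Tensor(0.0, ms.float32), Tensor(-0.5, ms.float32))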
- Outputs:
var (Tensor) - Represents the updated var. As the input parameters have been updated in-place, this output is always zero when the platform is GPU.
- Supported Platforms:
Ascend GPU
Examples
>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                  [0.7, 0.8]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(input_x)
>>> print(net.var.asnumpy())
[[ 0.0390525   0.11492836]
 [ 0.00066425  0.15075898]]
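For reference, a minimal NumPy sketch of the same step, assuming the FTRL-Proximal formulas given above (not MindSpore code; the function name is illustrative), reproduces the printed var to within float precision:

import numpy as np

# Plain-NumPy re-implementation of one FTRL-Proximal step, for cross-checking
# the doctest above; all names here are illustrative, not MindSpore APIs.
def ftrl_step(var, accum, linear, grad, lr=0.001, l1=0.0, l2=0.0, lr_power=-0.5):
    accum_new = accum + grad ** 2                                  # m_{t+1} = m_t + g^2
    sigma = (accum_new ** -lr_power - accum ** -lr_power) / lr
    linear_new = linear + grad - sigma * var                       # u_{t+1}
    quadratic = accum_new ** -lr_power / lr + 2.0 * l2
    var_new = np.where(np.abs(linear_new) > l1,
                       (np.sign(linear_new) * l1 - linear_new) / quadratic,
                       0.0)                                        # w_{t+1}
    return var_new

var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
linear = np.array([[0.9, 0.1], [0.7, 0.8]], np.float32)
grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
print(ftrl_step(var, accum, linear, grad))  # approximately the net.var values shown above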