
mindspore.nn.FTRL

class mindspore.nn.FTRL(*args, **kwargs)

Implements the FTRL algorithm with ApplyFtrl Operator.

FTRL is an online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions. Refer to the paper Adaptive Bound Optimization for Online Convex Optimization. For engineering details, refer to the paper Ad Click Prediction: a View from the Trenches.

The updating formulas are as follows,

$$
\begin{aligned}
m_{t+1} &= m_{t} + g^{2} \\
u_{t+1} &= u_{t} + g - \frac{m_{t+1}^{-p} - m_{t}^{-p}}{\alpha}\,\omega_{t} \\
\omega_{t+1} &=
\begin{cases}
\dfrac{\operatorname{sign}(u_{t+1})\,l_{1} - u_{t+1}}{\dfrac{m_{t+1}^{-p}}{\alpha} + 2\,l_{2}} & \text{if } |u_{t+1}| > l_{1} \\[1ex]
0.0 & \text{otherwise}
\end{cases}
\end{aligned}
$$

m represents accumulators, g represents grads, t represents the current step, u represents the linear coefficient to be updated, p represents lr_power, α represents learning_rate, ω represents params.
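
For readers who want to trace the formulas, below is a minimal NumPy sketch of one dense update step. It mirrors the equations above for illustration only; it is not the ApplyFtrl operator used by this optimizer, and the function name ftrl_step and its arguments are hypothetical.

import numpy as np

def ftrl_step(weights, accum, linear, grad, lr=0.001, lr_power=-0.5, l1=0.0, l2=0.0):
    """One dense FTRL update following the formulas above (illustrative sketch)."""
    new_accum = accum + grad ** 2                                   # m_{t+1} = m_t + g^2
    sigma = (new_accum ** (-lr_power) - accum ** (-lr_power)) / lr
    new_linear = linear + grad - sigma * weights                    # u_{t+1}
    denom = new_accum ** (-lr_power) / lr + 2 * l2
    new_weights = np.where(np.abs(new_linear) > l1,
                           (np.sign(new_linear) * l1 - new_linear) / denom,
                           0.0)                                     # omega_{t+1}
    return new_weights, new_accum, new_linear

# Example: three parameters, one gradient step, using the documented defaults.
w, m, u = np.zeros(3), np.full(3, 0.1), np.zeros(3)                 # 0.1 matches initial_accum
w, m, u = ftrl_step(w, m, u, grad=np.array([0.1, -0.2, 0.3]))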

Note

The sparse strategy is applied when the SparseGatherV2 operator is used in the forward network. To execute the sparse strategy on the host, set the target to CPU. The sparse feature is under continuous development.

If parameters are not grouped, the weight_decay in the optimizer is applied to the network parameters that do not have ‘beta’ or ‘gamma’ in their names. Users can group parameters to change the weight decay strategy. When parameters are grouped, each group can set its own weight_decay; if it is not set, the weight_decay in the optimizer is applied.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    Must be a list of Parameter or a list of dict. When the params is a list of dict, the strings “params”, “weight_decay”, “grad_centralization” and “order_params” are the keys that can be parsed.

    • params: Required. Parameters in current group. The value must be a list of Parameter.

    • lr: Using different learning rates for different parameter groups is currently not supported.

    • weight_decay: Optional. If “weight_decay” is in the keys, the corresponding weight decay value will be used. If not, the weight_decay in the optimizer will be used.

    • grad_centralization: Optional. Must be Boolean. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This configuration only works on the convolution layer.

    • order_params: Optional. When parameters are grouped, this is usually used to maintain the order in which parameters appear in the network to improve performance. The value should be the parameters whose order will be followed by the optimizer. If “order_params” is in the keys, other keys will be ignored, and the elements of ‘order_params’ must be in one of the groups of params.

  • initial_accum (float) – The starting value for the accumulators m, must be zero or a positive value. Default: 0.1.

  • learning_rate (float) – The learning rate value, must be zero or positive; a dynamic learning rate is currently not supported. Default: 0.001.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • use_locking (bool) – If true, use locks for updating operation. Default: False.

  • loss_scale (float) – Value for the loss scale. It must be greater than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False does this value need to be the same as the loss_scale in FixedLossScaleManager (see the construction sketch after this parameter list). Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • weight_decay (Union[float, int]) – Weight decay value to multiply the weight by, must be zero or a positive value. Default: 0.0.
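
The construction below is a minimal sketch that restates the defaults listed above, except that loss_scale is raised to 1024.0 to illustrate the pairing with FixedLossScaleManager when drop_overflow_update is False; the Net() definition is assumed to exist and the concrete values are illustrative only.

>>> from mindspore import nn, Model, FixedLossScaleManager
>>>
>>> net = Net()
>>> # Restate the documented defaults explicitly; loss_scale is raised to 1024.0 for illustration.
>>> optim = nn.FTRL(params=net.trainable_params(), initial_accum=0.1, learning_rate=0.001,
...                 lr_power=-0.5, l1=0.0, l2=0.0, use_locking=False,
...                 loss_scale=1024.0, weight_decay=0.0)
>>> # With drop_overflow_update=False, the manager's loss_scale must match the optimizer's loss_scale.
>>> loss_manager = FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=False)
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim, loss_scale_manager=loss_manager)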

Inputs:
  • grads (tuple[Tensor]) - The gradients of params in the optimizer, the shape is the same as the params in optimizer.

Outputs:

Tuple[Parameter], the updated parameters, the shape is the same as params.

Raises
  • TypeError – If initial_accum, learning_rate, lr_power, l1, l2 or loss_scale is not a float.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking is not a bool.

  • ValueError – If lr_power is greater than 0.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If initial_accum, l1 or l2 is less than 0.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore import nn, Model
>>>
>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.FTRL(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.FTRL(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The parameters in conv_params will use the learning rate of 0.1, a weight decay of 0.01, and grad
>>> # centralization set to True.
>>> # The parameters in no_conv_params will use the learning rate of 0.1, the default weight decay
>>> # of 0.0, and grad centralization set to False.
>>> # The final order of the parameters followed by the optimizer is the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
property target

This property is used to determine whether the parameters are updated on the host or on the device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.
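
A minimal usage sketch for this property, assuming the sparse scenario described in the Note above; setting it to ‘CPU’ is intended to run the sparse parameter update on the host, and the surrounding network setup is assumed.

>>> optim = nn.FTRL(params=net.trainable_params())
>>> optim.target = 'CPU'  # run the sparse parameter update on the host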