
mindspore.experimental.optim.RAdam

class mindspore.experimental.optim.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.0)

Implements the RAdam algorithm.

$$
\begin{aligned}
&\textbf{input}: \gamma \text{ (lr)},\ \beta_1, \beta_2 \text{ (betas)},\ \theta_0 \text{ (params)},\ f(\theta) \text{ (objective)},\ \lambda \text{ (weight decay)},\ \epsilon \text{ (epsilon)} \\
&\textbf{initialize}: m_0 \leftarrow 0 \text{ (first moment)},\ v_0 \leftarrow 0 \text{ (second moment)},\ \rho_\infty \leftarrow 2/(1-\beta_2) - 1 \\
&\textbf{for } t = 1 \textbf{ to } \ldots \textbf{ do} \\
&\quad g_t \leftarrow \nabla_\theta f_t(\theta_{t-1}) \\
&\quad \textbf{if } \lambda \neq 0 \\
&\quad\quad g_t \leftarrow g_t + \lambda \theta_{t-1} \\
&\quad m_t \leftarrow \beta_1 m_{t-1} + (1-\beta_1) g_t \\
&\quad v_t \leftarrow \beta_2 v_{t-1} + (1-\beta_2) g_t^2 \\
&\quad \widehat{m_t} \leftarrow m_t / (1-\beta_1^t) \\
&\quad \rho_t \leftarrow \rho_\infty - 2 t \beta_2^t / (1-\beta_2^t) \\
&\quad \textbf{if } \rho_t > 5 \\
&\quad\quad l_t \leftarrow \frac{\sqrt{1-\beta_2^t}}{\sqrt{v_t} + \epsilon} \\
&\quad\quad r_t \leftarrow \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\rho_t}} \\
&\quad\quad \theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t} r_t l_t \\
&\quad \textbf{else} \\
&\quad\quad \theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t} \\
&\textbf{return } \theta_t
\end{aligned}
$$
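
Read as imperative code, the update rule above amounts to the following minimal NumPy sketch of one RAdam step for a single parameter array. radam_step is a hypothetical helper written directly from the formulas and is not part of the MindSpore API:

import numpy as np

def radam_step(param, grad, m, v, t, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.0):
    """One illustrative RAdam update for a single parameter array."""
    beta1, beta2 = betas
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    if weight_decay != 0.0:
        grad = grad + weight_decay * param            # L2 penalty
    m = beta1 * m + (1.0 - beta1) * grad              # first moment
    v = beta2 * v + (1.0 - beta2) * grad ** 2         # second moment
    m_hat = m / (1.0 - beta1 ** t)                    # bias correction
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    if rho_t > 5.0:
        # Variance of the adaptive term is tractable: rectify the step.
        l_t = np.sqrt(1.0 - beta2 ** t) / (np.sqrt(v) + eps)
        r_t = np.sqrt((rho_t - 4.0) * (rho_t - 2.0) * rho_inf
                      / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        param = param - lr * m_hat * r_t * l_t
    else:
        # Early steps: plain momentum update without the adaptive term.
        param = param - lr * m_hat
    return param, m, v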

Warning

This is an experimental optimizer API that is subject to change. This module must be used together with the learning rate scheduler module in the LRScheduler class.

Parameters
  • params (Union[list(Parameter), list(dict)]) – list of parameters to optimize or dicts defining parameter groups.

  • lr (Union[int, float, Tensor], optional) – learning rate. Default: 1e-3.

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square. Default: (0.9, 0.999).

  • eps (float, optional) – term added to the denominator to improve numerical stability. Default: 1e-8.

  • weight_decay (float, optional) – weight decay (L2 penalty). Default: 0.0.
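
For illustration, params can be a flat list of parameters or a list of dicts defining parameter groups. A sketch, assuming the PyTorch-style 'params'/'lr'/'weight_decay' group keys and an existing network net (both assumptions, not shown on this page):

>>> # Single group: one lr for every trainable parameter.
>>> optimizer = optim.RAdam(net.trainable_params(), lr=1e-3)
>>> # Two groups: per-group settings override the constructor defaults.
>>> conv_params = [p for p in net.trainable_params() if 'conv' in p.name]
>>> no_conv_params = [p for p in net.trainable_params() if 'conv' not in p.name]
>>> optimizer = optim.RAdam([
...     {'params': conv_params, 'lr': 1e-2, 'weight_decay': 1e-4},
...     {'params': no_conv_params}],  # falls back to the defaults
...     lr=1e-3)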

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params.

Raises
  • ValueError – If the learning rate is not int, float or Tensor.

  • ValueError – If the learning rate is less than 0.

  • ValueError – If the eps is less than 0.0.

  • ValueError – If the weight_decay is less than 0.

  • ValueError – If the elements of betas are not in the range [0, 1).
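
As an illustration of these checks, an out-of-range hyperparameter is rejected at construction time (a sketch; the exact error message is unspecified here):

>>> try:
...     optim.RAdam(net.trainable_params(), lr=-0.1)
... except ValueError:
...     print("negative lr rejected")
negative lr rejected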

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = optim.RAdam(net.trainable_params(), lr=0.1)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
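
The example above stops at the train_step definition. A hedged continuation, pairing the optimizer with a scheduler as the warning above requires (the dataset iterator and the StepLR settings below are illustrative assumptions):

>>> scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
>>> for epoch in range(20):
...     for data, label in dataset:  # hypothetical iterator of (data, label) batches
...         loss = train_step(data, label)
...     scheduler.step()  # decay the lr once every step_size epochs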