mindspore.experimental.optim.Adam

class mindspore.experimental.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.0, amsgrad=False, *, maximize=False)

Implements the Adam algorithm.

The update rule is as follows:

$$
\begin{aligned}
&\textbf{input}: \gamma\ \text{(lr)},\ \beta_1, \beta_2\ \text{(betas)},\ \theta_0\ \text{(params)},\ f(\theta)\ \text{(objective)},\ \lambda\ \text{(weight decay)},\ \textit{amsgrad},\ \textit{maximize} \\
&\textbf{initialize}: m_0 \leftarrow 0\ \text{(first moment)},\ v_0 \leftarrow 0\ \text{(second moment)},\ \widehat{v_0}^{max} \leftarrow 0 \\
&\textbf{for}\ t = 1\ \textbf{to}\ \ldots\ \textbf{do} \\
&\quad \textbf{if}\ \textit{maximize}: \\
&\qquad g_t \leftarrow -\nabla_{\theta} f_t(\theta_{t-1}) \\
&\quad \textbf{else} \\
&\qquad g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
&\quad \textbf{if}\ \lambda \neq 0 \\
&\qquad g_t \leftarrow g_t + \lambda \theta_{t-1} \\
&\quad m_t \leftarrow \beta_1 m_{t-1} + (1-\beta_1) g_t \\
&\quad v_t \leftarrow \beta_2 v_{t-1} + (1-\beta_2) g_t^2 \\
&\quad \widehat{m_t} \leftarrow m_t / (1-\beta_1^t) \\
&\quad \widehat{v_t} \leftarrow v_t / (1-\beta_2^t) \\
&\quad \textbf{if}\ \textit{amsgrad} \\
&\qquad \widehat{v_t}^{max} \leftarrow \max(\widehat{v_{t-1}}^{max}, \widehat{v_t}) \\
&\qquad \theta_t \leftarrow \theta_{t-1} - \gamma\, \widehat{m_t} / (\sqrt{\widehat{v_t}^{max}} + \epsilon) \\
&\quad \textbf{else} \\
&\qquad \theta_t \leftarrow \theta_{t-1} - \gamma\, \widehat{m_t} / (\sqrt{\widehat{v_t}} + \epsilon) \\
&\textbf{return}\ \theta_t
\end{aligned}
$$
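To make one iteration concrete, here is an illustrative NumPy sketch of a single Adam step (the amsgrad branch is omitted for brevity). The helper adam_step is hypothetical and mirrors the formula above, not MindSpore's internal implementation:

>>> import numpy as np
>>> def adam_step(theta, grad, m, v, t, lr=1e-3, betas=(0.9, 0.999),
...               eps=1e-8, weight_decay=0.0, maximize=False):
...     # Hypothetical helper: one update following the formula above.
...     beta1, beta2 = betas
...     g = -grad if maximize else grad
...     if weight_decay != 0.0:
...         g = g + weight_decay * theta
...     m = beta1 * m + (1 - beta1) * g        # first moment
...     v = beta2 * v + (1 - beta2) * g * g    # second moment
...     m_hat = m / (1 - beta1 ** t)           # bias-corrected moments
...     v_hat = v / (1 - beta2 ** t)
...     theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
...     return theta, m, v
>>> theta, m, v = adam_step(np.zeros(3), np.ones(3), np.zeros(3), np.zeros(3), t=1)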

Warning

This is an experimental optimizer API that is subject to change. This module must be used together with the learning rate scheduler modules in the LRScheduler class, as sketched below.
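For example, a minimal sketch of pairing this optimizer with one of the LRScheduler subclasses (here StepLR; net is assumed to be an already constructed Cell):

>>> from mindspore.experimental import optim
>>> optimizer = optim.Adam(net.trainable_params(), lr=0.1)
>>> scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)
>>> # Call scheduler.step() once per epoch so the learning rate decays.
>>> scheduler.step()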

Parameters
  • params (Union[list(Parameter), list(dict)]) – list of parameters to optimize or dicts defining parameter groups (see the parameter-group sketch after the Examples).

  • lr (Union[int, float, Tensor], optional) – learning rate. Default: 1e-3.

  • betas (Tuple[float, float], optional) – The exponential decay rates for the first and second moment estimates. Default: (0.9, 0.999).

  • eps (float, optional) – term added to the denominator to improve numerical stability. Default: 1e-8.

  • weight_decay (float, optional) – weight decay (L2 penalty). Default: 0.0.

  • amsgrad (bool, optional) – whether to use the AMSGrad algorithm. Default: False.

Keyword Arguments

maximize (bool, optional) – whether to maximize the objective with respect to the params, instead of minimizing. Default: False.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = optim.Adam(net.trainable_params(), lr=0.1)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
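Parameter groups can carry their own hyperparameters when params is given as a list of dicts; settings not specified in a group fall back to the constructor defaults. A minimal sketch, assuming a split of the network above by parameter name is meaningful:

>>> conv_params = list(filter(lambda p: 'conv' in p.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda p: 'conv' not in p.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
...                 {'params': no_conv_params, 'lr': 0.01}]
>>> optimizer = optim.Adam(group_params, lr=0.1)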