
mindspore.mint.optim.SGD

class mindspore.mint.optim.SGD(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, *, maximize=False)[source]

Stochastic Gradient Descent optimizer.

v_{t+1} = u * v_t + gradient * (1 - dampening)

If nesterov is True:

p_{t+1} = p_t - lr * (gradient + u * v_{t+1})

If nesterov is False:

p_{t+1} = p_t - lr * v_{t+1}

Note that for the first step, v_{t+1} = gradient.

Here, p, v, and u denote the parameters, accum, and momentum, respectively.
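The update rules above can be sketched in plain NumPy (an illustrative sketch, not the MindSpore implementation; `sgd_step` is a hypothetical helper name, and its arguments mirror the symbols in the formulas):

```python
import numpy as np

def sgd_step(p, v, grad, lr, u=0.0, dampening=0.0, nesterov=False, first_step=False):
    """One SGD-with-momentum step following the update rules above (hypothetical helper)."""
    if first_step:
        v_next = grad.copy()                     # for the first step, v_{t+1} = gradient
    else:
        v_next = u * v + grad * (1 - dampening)  # v_{t+1} = u * v_t + gradient * (1 - dampening)
    if nesterov:
        p_next = p - lr * (grad + u * v_next)    # p_{t+1} = p_t - lr * (gradient + u * v_{t+1})
    else:
        p_next = p - lr * v_next                 # p_{t+1} = p_t - lr * v_{t+1}
    return p_next, v_next

p = np.array([1.0, 2.0])
grad = np.array([0.5, 0.5])
p1, v1 = sgd_step(p, np.zeros_like(p), grad, lr=0.1, u=0.9, first_step=True)
# First step: v1 equals grad, so p1 = p - lr * grad = [0.95, 1.95]
```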

Warning

This is an experimental optimizer API that may be modified or removed in the future. This module must be used together with the learning-rate scheduler module (the LRScheduler class).

Parameters
  • params (Union[list(Parameter), list(dict)]) – list of parameters to optimize or dicts defining parameter groups.

  • lr (Union[bool, int, float, Tensor]) – learning rate.

  • momentum (Union[bool, int, float], optional) – momentum factor. Default: 0.

  • weight_decay (Union[bool, int, float], optional) – weight decay (L2 penalty). Must be greater than or equal to 0. Default: 0.

  • dampening (Union[bool, int, float], optional) – dampening for momentum. Default: 0.

  • nesterov (bool, optional) – enables Nesterov momentum. If Nesterov momentum is used, momentum must be positive and dampening must be 0. Default: False.

Keyword Arguments

maximize (bool, optional) – maximize the params based on the objective, instead of minimizing. Default: False.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params.

Raises
  • ValueError – If the learning rate is not a bool, int, float, or Tensor.

  • ValueError – If the learning rate is less than 0.

  • ValueError – If the momentum or weight_decay value is less than 0.0.

  • ValueError – If the momentum, dampening, or weight_decay value is not a bool, int, or float.

  • ValueError – If nesterov or maximize is not a bool.

  • ValueError – If nesterov is True while momentum is not positive or dampening is not 0.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> from mindspore import nn
>>> from mindspore.mint import optim
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = optim.SGD(net.trainable_params(), lr=0.1)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
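In a full training script, train_step would be called in a loop over the dataset. The effect of repeated SGD-with-momentum updates can be sketched on a toy quadratic loss in plain NumPy (an illustration of the update rules, not MindSpore code; the loss function and hyperparameters are arbitrary assumptions):

```python
import numpy as np

# Toy quadratic loss: L(p) = 0.5 * ||p||^2, so the gradient is simply p.
p = np.array([4.0, -2.0])
v = np.zeros_like(p)
lr, u = 0.1, 0.9          # learning rate and momentum (assumed values)

losses = []
for step in range(50):
    grad = p                                   # dL/dp for the quadratic loss
    v = grad if step == 0 else u * v + grad    # v_{t+1} = u * v_t + gradient (dampening = 0)
    p = p - lr * v                             # p_{t+1} = p_t - lr * v_{t+1}
    losses.append(0.5 * float(np.sum(p * p)))

# The loss decreases overall as the parameters approach the minimum at 0.
```

With maximize=True, the optimizer would instead ascend the objective, which corresponds to flipping the sign of the gradient in this sketch.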