mindspore.experimental.optim.Adagrad
- class mindspore.experimental.optim.Adagrad(params, lr=0.01, lr_decay=0.0, weight_decay=0.0, initial_accumulator_value=0.0, eps=1e-10, *, maximize=False)[source]
Implements the Adagrad algorithm.
\[\begin{split}\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)}, \: \lambda \text{ (weight decay)}, \\
    &\hspace{12mm} \tau \text{ (initial accumulator value)}, \: \eta \text{ (lr decay)} \\
    &\textbf{initialize} : state\_sum_0 \leftarrow \tau \\[-1.ex]
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t = 1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{5mm} \tilde{\gamma} \leftarrow \gamma / (1 + (t-1)\eta) \\
    &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} state\_sum_t \leftarrow state\_sum_{t-1} + g_t^2 \\
    &\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \tilde{\gamma} \frac{g_t}{\sqrt{state\_sum_t} + \epsilon} \\
    &\rule{110mm}{0.4pt} \\[-1.ex]
    &\bf{return} \: \theta_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}\end{split}\]
Warning
This is an experimental optimizer API that is subject to change. This module must be used together with the learning rate scheduler module LRScheduler.
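The pseudocode above can be restated as a minimal plain-Python sketch of a single update step; the names (theta, state_sum, lr_t) are illustrative only and this is not the MindSpore implementation:

# Minimal sketch of one Adagrad step for a single scalar parameter,
# following the formula above; variable names are illustrative only.
def adagrad_step(theta, grad, state_sum, t, lr=0.01, lr_decay=0.0,
                 weight_decay=0.0, eps=1e-10, maximize=False):
    if maximize:                                   # maximize flips the gradient sign
        grad = -grad
    if weight_decay != 0:                          # L2 penalty added to the gradient
        grad = grad + weight_decay * theta
    lr_t = lr / (1 + (t - 1) * lr_decay)           # decayed learning rate (gamma tilde)
    state_sum = state_sum + grad * grad            # accumulate squared gradients
    theta = theta - lr_t * grad / (state_sum ** 0.5 + eps)
    return theta, state_sum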
- Parameters
params (Union[list(Parameter), list(dict)]) – list of parameters to optimize or dicts defining parameter groups (see the grouped-parameter sketch after this parameter list).
lr (Union[int, float, Tensor], optional) – learning rate. Default: 1e-2.
lr_decay (float, optional) – learning rate decay. Default: 0.0.
weight_decay (float, optional) – weight decay (L2 penalty). Default: 0.0.
initial_accumulator_value (float, optional) – the initial accumulator value. Default: 0.0.
eps (float, optional) – term added to the denominator to improve numerical stability. Default: 1e-10.
- Keyword Arguments
maximize (bool, optional) – maximize the params based on the objective, instead of minimizing. Default: False.
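As noted for params above, parameter groups can be passed as a list of dicts. A hedged sketch of that grouped form follows, assuming net is an already-constructed network (such as the LeNet5 used in Examples below) and that per-group keys mirror the constructor arguments:

>>> from mindspore.experimental import optim
>>> # Hypothetical grouping: conv parameters keep the top-level lr, fc parameters get their own.
>>> conv_params = [p for p in net.trainable_params() if 'conv' in p.name]
>>> fc_params = [p for p in net.trainable_params() if 'fc' in p.name]
>>> optimizer = optim.Adagrad([
...     {'params': conv_params},
...     {'params': fc_params, 'lr': 1e-3, 'weight_decay': 1e-4},
... ], lr=0.01)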
- Inputs:
gradients (tuple[Tensor]) - The gradients of params.
- Raises
ValueError – If the learning rate is not int, float or Tensor.
ValueError – If the learning rate is less than 0.
ValueError – If the learning rate decay is less than 0.
ValueError – If the weight_decay is less than 0.
ValueError – If the initial_accumulator_value is less than 0.0.
ValueError – If the eps is less than 0.0.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore
>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = optim.Adagrad(net.trainable_params(), lr=0.1)
>>> def forward_fn(data, label):
...     logits = net(data)
...     loss = loss_fn(logits, label)
...     return loss, logits
>>> grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
>>> def train_step(data, label):
...     (loss, _), grads = grad_fn(data, label)
...     optimizer(grads)
...     return loss
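A short sketch of driving train_step from the example above in a training loop; create_dataset here is a hypothetical helper yielding (data, label) batches, not part of this API:

>>> # Assumed: create_dataset() yields (data, label) batches; illustrative only.
>>> for epoch in range(2):
...     for data, label in create_dataset():
...         loss = train_step(data, label)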