mindformers.core.PolynomialWithWarmUpLR

class mindformers.core.PolynomialWithWarmUpLR(learning_rate: float, total_steps: int, warmup_steps: int = None, lr_end: float = 1e-7, power: float = 1.0, warmup_lr_init: float = 0., warmup_ratio: float = None, decay_steps: int = None, **kwargs)[source]

Polynomial with Warm Up Learning Rate.

At the beginning of training, the learning rate increases gradually from a low initial value, \(\eta_{\text{warmup}}\), to the starting learning rate, \(\eta_{\text{start}}\). During the warm-up phase, the learning rate at step \(t\) is given by:

\[\eta_t = \eta_{\text{warmup}} + t \times \frac{\eta_{\text{start}} - \eta_{\text{warmup}}}{\text{warmup\_steps}}\]

where \(\text{warmup\_steps}\) represents the total number of steps in the warm-up phase.
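
For instance, with the values used in the Examples section below (learning_rate=0.005, warmup_lr_init=0., warmup_steps=10), the warm-up formula can be checked in plain Python (an illustrative sketch, not the library implementation):

>>> warmup_lr_init, learning_rate, warmup_steps = 0., 0.005, 10
>>> t = 1
>>> # eta_t = eta_warmup + t * (eta_start - eta_warmup) / warmup_steps
>>> lr = warmup_lr_init + t * (learning_rate - warmup_lr_init) / warmup_steps
>>> # lr -> 0.0005, matching the first value printed in the Examples below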

After the warm-up phase concludes, the learning rate decays polynomially toward the final learning rate, \(\eta_{\text{end}}\), over the remaining steps up to \(\text{total\_steps}\). The learning rate at step \(t\) in this phase is given by:

\[\eta_t = \eta_{\text{end}} + (\eta_{\text{start}} - \eta_{\text{end}}) \times \left(1 - \frac{t - \text{warmup\_steps}}{\text{decay\_steps}}\right)^{\text{power}}\]

where \(\text{power}\) is the exponent of the polynomial, which controls the decay rate, and \(\text{decay\_steps}\) is the number of steps in the decay phase.
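
Continuing the same example (lr_end=1e-7, power=1.0, and decay_steps defaulting to total_steps - warmup_steps = 10), the decay formula at step 15 can be checked the same way (an illustrative sketch, not the library implementation):

>>> lr_end, learning_rate, power = 1e-7, 0.005, 1.0
>>> warmup_steps, decay_steps = 10, 10
>>> t = 15
>>> # eta_t = eta_end + (eta_start - eta_end) * (1 - (t - warmup_steps) / decay_steps) ** power
>>> lr = lr_end + (learning_rate - lr_end) * (1 - (t - warmup_steps) / decay_steps) ** power
>>> # lr -> about 0.0025, matching the 0.0025000498 (float32) printed in the Examples below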

This learning rate strategy suits scenarios where the learning rate should ramp up smoothly during the early stages of training and then decrease gradually in the later stages. The warm-up helps prevent gradient explosion at the start, and the polynomial decay in the later part of training helps the model converge with better generalization.

Parameters
  • learning_rate (float) – Initial value of the learning rate, i.e. \(\eta_{\text{start}}\) in the formulas above.

  • total_steps (int) – The number of total steps.

  • warmup_steps (int) – The number of warm-up steps. Default: None.

  • lr_end (float) – Final value of the learning rate, \(\eta_{\text{end}}\). Default: 1e-7.

  • power (float) – The power of the polynomial. Default: 1.0.

  • warmup_lr_init (float) – Initial learning rate during warm-up, \(\eta_{\text{warmup}}\). Default: 0.

  • warmup_ratio (float) – Ratio of total training steps used for warmup. Default: None.

  • decay_steps (int) – The number of decay steps, which must be smaller than total_steps - warmup_steps. If the value is None, decay steps will be total_steps - warmup_steps. Default: None.

Inputs:
  • global_step (int) – The global step.

Outputs:
  Learning rate.

Examples

>>> import mindspore as ms
>>> from mindformers.core import PolynomialWithWarmUpLR
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> total_steps = 20
>>> warmup_steps = 10
>>> learning_rate = 0.005
>>> lr_end = 0.0000001
>>>
>>> polynomial_warmup = PolynomialWithWarmUpLR(learning_rate=learning_rate,
...                                            warmup_steps=warmup_steps,
...                                            total_steps=total_steps,
...                                            lr_end=lr_end)
>>> print(polynomial_warmup(1))
0.0005
>>> print(polynomial_warmup(15))
0.0025000498
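
In practice, such a schedule is typically passed to an optimizer as a dynamic learning rate rather than called step by step. A minimal sketch, assuming PolynomialWithWarmUpLR can be supplied anywhere MindSpore accepts a learning-rate schedule object; the nn.Dense network here is only a placeholder:

>>> from mindspore import nn
>>> from mindformers.core import PolynomialWithWarmUpLR
>>>
>>> net = nn.Dense(4, 2)  # placeholder network for illustration
>>> lr_schedule = PolynomialWithWarmUpLR(learning_rate=0.005, warmup_steps=10,
...                                      total_steps=20, lr_end=1e-7)
>>> # The optimizer queries the schedule with the current global step at each update.
>>> optimizer = nn.AdamWeightDecay(net.trainable_params(), learning_rate=lr_schedule)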