
mindformers.core.ConstantWarmUpLR

class mindformers.core.ConstantWarmUpLR(learning_rate: float, warmup_steps: int = None, warmup_lr_init: float = 0., warmup_ratio: float = None, total_steps: int = None, **kwargs)[source]

Constant Warm Up Learning Rate.

This learning rate strategy maintains a constant learning rate during the warm-up phase. It is particularly suitable for scenarios where a stable, lower learning rate is needed at the beginning of training to avoid issues such as gradient explosion, before transitioning to the main learning rate schedule.

During the warm-up phase, the learning rate is kept at a fixed value, denoted as $\eta_{\text{warmup}}$. The formula for the learning rate during the warm-up phase is:

$$\eta_t = \eta_{\text{warmup}}$$

Here, $\eta_{\text{warmup}}$ is the fixed learning rate applied during the warm-up steps, and $t$ represents the current step.

After the warm-up phase concludes, the learning rate transitions to the main learning rate, denoted as $\eta_{\text{main}}$. The formula for the learning rate after the transition is:

$$\eta_t = \eta_{\text{main}}$$
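Taken together, the two formulas describe a simple piecewise rule: one fixed value while warming up, another afterwards. A minimal standalone sketch of that rule, with illustrative names only (it mirrors the formulas above, not the class's actual implementation):

>>> def piecewise_lr(step, warmup_steps, warmup_lr, main_lr):
...     # eta_t = eta_warmup inside the warm-up window
...     if step < warmup_steps:
...         return warmup_lr
...     # eta_t = eta_main once the warm-up phase has concluded
...     return main_lr
>>> piecewise_lr(3, 10, 1e-4, 5e-3)
0.0001
>>> piecewise_lr(15, 10, 1e-4, 5e-3)
0.005
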
Parameters
  • learning_rate (float) – Initial value of learning rate.

  • warmup_steps (int) – The number of warm up steps. Default: None.

  • warmup_lr_init (float) – Initial learning rate in warm up steps. Default: 0.

  • warmup_ratio (float) – Ratio of total training steps used for warmup. Default: None.

  • total_steps (int) – The total number of training steps. Default: None.

Inputs:
  • global_step (int) - The global step.

Outputs:

Learning rate.
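
When warmup_ratio is given instead of warmup_steps, the length of the warm-up phase is presumably derived from total_steps. The resolution below is only an assumption sketched for illustration, not documented behaviour:

>>> total_steps = 20
>>> warmup_ratio = 0.5
>>> warmup_steps = int(total_steps * warmup_ratio)  # assumed resolution of warmup_ratio
>>> warmup_steps
10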

Examples

>>> import mindspore as ms
>>> from mindformers.core import ConstantWarmUpLR
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> total_steps = 20
>>> warmup_steps = 10
>>> learning_rate = 0.005
>>>
>>> constant_warmup = ConstantWarmUpLR(learning_rate=learning_rate,
...                                    warmup_steps=warmup_steps,
...                                    total_steps=total_steps)
>>> print(constant_warmup(1))
0.0005
>>> print(constant_warmup(15))
0.005
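
Because ConstantWarmUpLR is a callable learning rate schedule, the instance built above can typically be handed to a MindSpore optimizer in place of a fixed learning rate. A sketch of that usage, reusing constant_warmup from the example above (the nn.Dense network is only a placeholder to obtain trainable parameters):

>>> from mindspore import nn
>>> net = nn.Dense(4, 2)  # placeholder network, just to obtain trainable parameters
>>> optimizer = nn.Adam(net.trainable_params(), learning_rate=constant_warmup)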