mindformers.core.MFLossMonitor

class mindformers.core.MFLossMonitor(learning_rate: Optional[Union[float, LearningRateSchedule]] = None, per_print_times: int = 1, micro_batch_num: int = 1, micro_batch_interleave_num: int = 1, origin_epochs: int = None, dataset_size: int = None, initial_epoch: int = 0, initial_step: int = 0, global_batch_size: int = 0, gradient_accumulation_steps: int = 1)

Monitor loss and other parameters during the training process.

Parameters
  • learning_rate (Union[float, LearningRateSchedule], optional) – The learning rate, given either as a fixed float or as a LearningRateSchedule. Default: None.

  • per_print_times (int) – Interval, in steps, at which log information is printed. Default: 1.

  • micro_batch_num (int) – Number of micro batches used in pipeline parallelism. Default: 1.

  • micro_batch_interleave_num (int) – Number of splits of the batch size used for micro-batch interleaving. Default: 1.

  • origin_epochs (int) – Total number of training epochs. Default: None.

  • dataset_size (int) – Size of the training dataset. Default: None.

  • initial_epoch (int) – The epoch from which training starts. Default: 0.

  • initial_step (int) – The step from which training starts. Default: 0.

  • global_batch_size (int) – The total batch size across all devices. Default: 0.

  • gradient_accumulation_steps (int) – Number of gradient accumulation steps. Default: 1. (See the construction sketch below.)
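
As an illustration of how these parameters fit together, a minimal construction sketch with placeholder values; the numbers are illustrative only, not tuned recommendations:

>>> from mindformers.core import MFLossMonitor
>>> monitor = MFLossMonitor(
...     learning_rate=0.01,      # fixed float learning rate
...     per_print_times=10,      # log every 10 steps
...     origin_epochs=3,         # total epochs planned for the run
...     dataset_size=1000,       # size of the training dataset
...     initial_epoch=1,         # starting epoch
...     initial_step=500,        # starting step
...     global_batch_size=64,    # total batch size across devices
... )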

Examples

>>> from mindformers.core import MFLossMonitor
>>> lr = 0.01
>>> monitor = MFLossMonitor(learning_rate=lr, per_print_times=10)
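
MFLossMonitor is used as a training callback. As a usage sketch, assuming the monitor behaves as a standard MindSpore callback and that network, loss_fn, optimizer, and train_dataset are defined elsewhere:

>>> from mindspore import Model
>>> model = Model(network, loss_fn=loss_fn, optimizer=optimizer)  # assumed setup
>>> model.train(1, train_dataset, callbacks=[monitor])  # logs loss every 10 steps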