mindformers.wrapper.MFPipelineWithLossScaleCell

class mindformers.wrapper.MFPipelineWithLossScaleCell(network, optimizer, use_clip_grad=True, max_grad_norm=1.0, scale_sense=1.0, micro_batch_num=1, local_norm=False, **kwargs)[source]

A train-one-step wrapper cell with loss scaling for pipeline-parallel training in MindFormers.

Parameters
  • network (Cell) – The training network. Note that the loss function should already be included in the network.

  • optimizer (Optimizer) – Optimizer for updating the weights.

  • use_clip_grad (bool, optional) – Whether to use gradient clipping. Default: True.

  • max_grad_norm (float, optional) – Maximum gradient norm for clipping. Default: 1.0.

  • scale_sense (Union[Tensor, Cell], optional) – Cell to perform the loss scaling, or a scalar Tensor holding the loss scale value. Default: 1.0.

  • micro_batch_num (int, optional) – Number of micro batches for pipeline parallelism. Default: 1.

  • local_norm (bool, optional) – Whether to calculate the local norm. Default: False.

  • kwargs (Any) – Additional parameters.

Inputs:
  • (*inputs) (Tuple(Tensor)) - Tuple of input tensors with shape \((N, \ldots)\).

Outputs:

Tuple of 5 or 7 Tensors: the loss, overflow flag, current loss scale value, learning rate, and global gradient norm; when local_norm=True, also the local gradient norms and the sizes of the local-norm gradient groups.

  • loss (Tensor) - A scalar, the loss value.

  • overflow (Tensor) - A scalar of type bool indicating whether overflow occurred.

  • loss scale (Tensor) - The loss scale value, the shape is \(()\) or \((1,)\).

  • learning rate (Tensor) - A scalar, the learning rate of the optimizer.

  • global norm (Tensor) - A scalar, the global norm of all gradients, calculated only when use_clip_grad=True; otherwise None.

  • local_norm (Tensor) - The local norms of the gradients by group, returned only when local_norm=True.

  • size (Tensor) - The size of each gradient group, returned only when local_norm=True.

Raises
  • TypeError – If scale_sense is neither a Cell nor a Tensor.

  • ValueError – If the shape of scale_sense is neither (1,) nor ().

  • ValueError – If the parallel mode is not one of [ParallelMode.SEMI_AUTO_PARALLEL, ParallelMode.AUTO_PARALLEL].
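A minimal usage sketch is shown below. The toy network, dataset inputs, and optimizer settings are illustrative assumptions, not part of this API; the wrapper itself raises ValueError outside SEMI_AUTO_PARALLEL or AUTO_PARALLEL mode, so this code must run in a multi-device pipeline-parallel environment (e.g. launched with msrun) and is not runnable as a standalone script.

```python
# Hedged sketch: wrapping a loss-bearing network for pipeline-parallel
# training with dynamic loss scaling. Requires a distributed launch;
# shown here only to illustrate how the arguments fit together.
import mindspore as ms
from mindspore import nn
from mindformers.wrapper import MFPipelineWithLossScaleCell

ms.set_context(mode=ms.GRAPH_MODE)
ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel",
                             pipeline_stages=2)

network = ...  # a Cell whose construct() already returns the loss
optimizer = nn.AdamWeightDecay(network.trainable_params(),
                               learning_rate=1e-3)

# scale_sense may be a scalar Tensor of shape () or (1,), or an update
# cell such as nn.DynamicLossScaleUpdateCell.
scale_sense = nn.DynamicLossScaleUpdateCell(loss_scale_value=2**12,
                                            scale_factor=2,
                                            scale_window=1000)

train_step = MFPipelineWithLossScaleCell(network, optimizer,
                                         use_clip_grad=True,
                                         max_grad_norm=1.0,
                                         scale_sense=scale_sense,
                                         micro_batch_num=4)

# Each call consumes one batch (split internally into 4 micro-batches)
# and returns (loss, overflow, loss_scale, learning_rate, global_norm).
# loss, overflow, scale, lr, gnorm = train_step(input_ids, labels)
```

Since use_clip_grad=True here, the returned tuple includes the global gradient norm; setting local_norm=True would extend it with the per-group local norms and group sizes.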