mindformers.wrapper.MFTrainOneStepCell
- class mindformers.wrapper.MFTrainOneStepCell(network, optimizer, use_clip_grad=False, max_grad_norm=1.0, scale_sense=1.0, local_norm=False, **kwargs)[source]
TrainOneStep cell for MindFormers. It wraps network training with loss scaling, gradient clipping, gradient accumulation, exponential moving average, and so on.
This is a training step with loss scaling. It takes a network, an optimizer, and a scale update Cell (or a Tensor) as arguments. The loss scale value can be updated on either the host side or the device side. To update it on the host side, pass a Tensor as scale_sense; otherwise, pass a Cell instance that updates the loss scale as scale_sense.
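A minimal construction sketch of the two scale_sense modes. The toy network, optimizer, and the use of mindspore.nn.DynamicLossScaleUpdateCell are illustrative assumptions, not part of this API:
>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindformers.wrapper import MFTrainOneStepCell
>>>
>>> # Toy network and optimizer, used only to show construction.
>>> net = nn.WithLossCell(nn.Dense(4, 2), nn.SoftmaxCrossEntropyWithLogits(sparse=True))
>>> optimizer = nn.SGD(net.trainable_params(), learning_rate=0.01)
>>>
>>> # Host-side (static) loss scale: pass a Tensor; it can later be changed with set_sense_scale.
>>> static_cell = MFTrainOneStepCell(net, optimizer, scale_sense=ms.Tensor(1024.0, ms.float32))
>>>
>>> # Device-side (dynamic) loss scale: pass an update Cell that adjusts the scale on overflow.
>>> update_cell = nn.DynamicLossScaleUpdateCell(loss_scale_value=2**12, scale_factor=2, scale_window=1000)
>>> dynamic_cell = MFTrainOneStepCell(net, optimizer, scale_sense=update_cell)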
- Parameters
network (Cell) – The training network. The network only supports single output.
optimizer (Cell) – Optimizer for updating the network parameters.
use_clip_grad (bool, optional) – Whether to use the gradient clipping function (see the sketch after this parameter list). Default: False.
max_grad_norm (float, optional) – Maximum gradient norm used for clipping. Default: 1.0.
scale_sense (Union[Tensor, Cell], optional) – If this value is a Cell, it will be called by MFTrainOneStepCell to update the loss scale. If this value is a Tensor, the loss scale can be modified by set_sense_scale; the shape should be \(()\) or \((1,)\). Default: 1.0.
local_norm (bool, optional) – Whether to calculate the local gradient norms by group. Default: False.
kwargs (Any) – Additional parameters.
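A short sketch of enabling gradient clipping and local norm reporting, reusing the toy net and optimizer from the sketch above (illustrative assumptions only):
>>> # Clip the global gradient norm to max_grad_norm before the optimizer update,
>>> # and also report per-group local gradient norms in the outputs.
>>> clip_cell = MFTrainOneStepCell(net, optimizer, use_clip_grad=True, max_grad_norm=1.0, local_norm=True)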
- Inputs:
(*inputs) (Tuple(Tensor)) - Tuple of input tensors with shape \((N, \ldots)\).
- Outputs:
Tuple of 5 or 7 Tensors: the loss, the overflow flag, the current loss scale value, the learning rate, the global gradient norm, and, when local_norm=True, the local gradient norms and the sizes of the local norm groups.
loss (Tensor) - A scalar, the loss value.
overflow (Tensor) - A scalar, whether an overflow occurred, the type is bool.
loss scale (Tensor) - The loss scale value, the shape is \(()\) or \((1,)\).
learning rate (Tensor) - A scalar, the learning rate of the optimizer.
global norm (Tensor) - A scalar, the global norm of all gradients; only calculated when use_clip_grad=True, otherwise None.
local_norm (Tensor) - The local norms of the gradients by group; only returned when local_norm=True.
size (Tensor) - The size of each gradient group; only returned when local_norm=True.
- Raises
TypeError – If scale_sense is neither Cell nor Tensor.
ValueError – If shape of scale_sense is neither (1,) nor ().
Examples
>>> from mindformers.models.llama import LlamaConfig, LlamaForCausalLM
>>> from mindformers.wrapper import MFTrainOneStepCell
>>> import mindspore as ms
>>> from mindformers.core.optim import AdamW
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>>
>>> config = LlamaConfig(num_layers=2)
>>> net = LlamaForCausalLM(config=config)
>>> net.set_train(True)
>>> optimizer = AdamW(net.trainable_params())
>>>
>>> mft = MFTrainOneStepCell(net, optimizer)
>>> inputs = ms.Tensor(np.ones([1, 2049]), ms.int32)
>>> out = mft(inputs)
>>>
>>> loss, overflow, loss_scale, lr, global_norm = out
>>> print(loss.shape, overflow, loss_scale, lr, global_norm)
(1,) False 1.0 0.001 None
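A variant of the example above with local_norm=True, sketched under the assumption that the 7 outputs follow the order given in the Outputs section (printed values are omitted because they depend on the run):
>>> mft_local = MFTrainOneStepCell(net, optimizer, local_norm=True)
>>> out = mft_local(inputs)
>>> # With local_norm=True, the cell returns 7 tensors instead of 5.
>>> loss, overflow, loss_scale, lr, global_norm, local_norm, size = out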