mindspore.nn.MicroBatchInterleaved

class mindspore.nn.MicroBatchInterleaved(network, interleave_num=2)

This cell splits the input along the 0th (batch) dimension into interleave_num pieces and then performs the computation of the wrapped cell on each piece. Application scenario: in semi-automatic parallel mode, when the network contains model parallelism, while one slice of data is computing its forward pass, the next slice can execute its communication operators at the same time, so communication and computation overlap and the overall step time shrinks.
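The behavior is roughly equivalent to the following sketch. The class name InterleaveSketch and the plain-Python slicing are illustrative assumptions, not the library implementation, which uses dedicated slicing primitives so the parallel compiler can schedule the overlap; the sketch only shows the split-run-accumulate pattern.

import mindspore.nn as nn

class InterleaveSketch(nn.Cell):
    """Illustrative stand-in: split the batch (dim 0) into
    interleave_num micro-batches, run the wrapped network on each,
    and accumulate the single-tensor outputs."""
    def __init__(self, network, interleave_num=2):
        super().__init__()
        self.network = network
        self.interleave_num = interleave_num

    def construct(self, x):
        # Assumes x.shape[0] is divisible by interleave_num.
        micro_size = x.shape[0] // self.interleave_num
        output = 0.0
        for i in range(self.interleave_num):
            micro_x = x[i * micro_size:(i + 1) * micro_size]
            # Outputs of the micro-batches are summed into one tensor.
            output = output + self.network(micro_x)
        return output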

Note

The output of the input network must be a single tensor.

Parameters
  • network (Cell) – The target network to wrap.

  • interleave_num (int, optional) – The number of pieces into which the batch dimension is split. Default: 2.

Inputs:

tuple[Tensor]. The same as the inputs of the wrapped network.

Outputs:

Tensor. The output of the wrapped network.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore.nn as nn
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> net = nn.MicroBatchInterleaved(net, 2)
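>>> # A hedged continuation of the example above: the input shape follows
>>> # the linked LeNet5 definition, and the batch size of 4 is an
>>> # illustrative assumption; it must be divisible by interleave_num.
>>> import numpy as np
>>> import mindspore as ms
>>> # The batch of 4 is split into 2 micro-batches of 2 along dim 0,
>>> # and the per-slice outputs are accumulated into a single tensor.
>>> x = ms.Tensor(np.random.rand(4, 1, 32, 32), ms.float32)
>>> out = net(x)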