mindspore.communication.comm_func.batch_isend_irecv

mindspore.communication.comm_func.batch_isend_irecv(p2p_op_list)[source]

Batch send and recv tensors asynchronously.

Note

  • The 'isend' and 'irecv' P2POp operations in p2p_op_list must match across ranks: every 'isend' to a rank must be paired with a corresponding 'irecv' on that rank.

  • All P2POp entries in p2p_op_list must use the same communication group.

  • The tag argument of P2POp is not supported yet.

  • The tensor of a P2POp in p2p_op_list will not be modified in place; results are returned as new tensors.

  • Only PyNative mode is supported; Graph mode is not currently supported.
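The matching rule in the first note can be illustrated without a distributed environment. The following is a minimal pure-Python sketch of how 'isend'/'irecv' operations must pair across ranks; the names Op and check_pairing are illustrative helpers, not part of the MindSpore API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    kind: str   # 'isend' or 'irecv'
    peer: int   # destination rank (isend) or source rank (irecv)

def check_pairing(op_lists):
    """op_lists[rank] is that rank's p2p_op_list. Return True when every
    'isend' has a matching 'irecv' on the peer rank and vice versa."""
    sends = Counter()   # (src, dst) pairs produced by isend ops
    recvs = Counter()   # (src, dst) pairs expected by irecv ops
    for rank, ops in enumerate(op_lists):
        for op in ops:
            if op.kind == 'isend':
                sends[(rank, op.peer)] += 1
            else:
                recvs[(op.peer, rank)] += 1
    return sends == recvs

# Ring exchange on 2 ranks, mirroring the example below: each rank sends
# to the next rank and receives from the previous rank.
world = 2
op_lists = [
    [Op('isend', (r + 1) % world), Op('irecv', (r - 1) % world)]
    for r in range(world)
]
print(check_pairing(op_lists))   # True: every send has a matching recv
```

If any rank omits its 'irecv' (or sends to a rank with no matching receive), the check fails; in a real run such a mismatch would cause the batched communication to hang or error.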

Parameters

p2p_op_list (list[P2POp]) – A list of P2POp objects. P2POp is the type mindspore.communication.comm_func.P2POp.

Returns

tuple(Tensor). The output tensors correspond one-to-one to p2p_op_list. At each 'isend' position, the output is a placeholder scalar tensor with no meaning. At each 'irecv' position, the output is the tensor received from the remote device.

Raises

TypeError – If the elements of p2p_op_list are not all of type P2POp.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see the msrun startup documentation for more details.

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.communication as comm
>>>
>>> comm.init()
>>> this_rank = comm.get_rank()
>>> world_size = comm.get_group_size()
>>> next_rank = (this_rank + 1) % world_size
>>> prev_rank = (this_rank + world_size - 1) % world_size
>>>
>>> send_tensor = ms.Tensor(this_rank + 1, dtype=ms.float32)
>>> recv_tensor = ms.Tensor(0., dtype=ms.float32)
>>>
>>> send_op = comm.comm_func.P2POp('isend', send_tensor, next_rank)
>>> recv_op = comm.comm_func.P2POp('irecv', recv_tensor, prev_rank)
>>>
>>> p2p_op_list = [send_op, recv_op]
>>> output = comm.comm_func.batch_isend_irecv(p2p_op_list)
>>> print(output)
rank 0:
(Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 2))
rank 1:
(Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1))