mindspore.communication.comm_func.irecv

mindspore.communication.comm_func.irecv(tensor, src=0, group=GlobalComm.WORLD_COMM_GROUP, tag=0)[source]

Receive a tensor from src asynchronously.

Note

Send and Receive must be used in combination and must have the same tag. The shape and dtype of the input tensor are used to receive the incoming tensor; the value of the input tensor does not take effect. Only PyNative mode is supported; Graph mode is not currently supported.

Parameters
  • tensor (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The shape and dtype of this tensor are used to receive the incoming tensor; its value does not take effect.

  • src (int, optional) – An integer identifying the source rank (global rank). Default: 0.

  • group (str, optional) – The communication group to work on. Default: "hccl_world_group" on Ascend, "nccl_world_group" on GPU.

  • tag (int, optional) – An integer identifying the send/recv message tag. The message will be received from the Send op with the same tag. Default: 0.

Returns

Tuple(Tensor, CommHandle). The shape of the output tensor is \((x_1, x_2, ..., x_R)\). CommHandle is an async work handle that can be waited on with its wait() method.

Raises
  • TypeError – If src is not an int or group is not a str.

  • ValueError – If the rank ID of the process is greater than the rank size of the communication group.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see the msrun startup for more details.

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import init
>>> from mindspore.communication.comm_func import irecv
>>>
>>> # Launch 2 processes.
>>> # Process 0 sends the following array to Process 1:
>>> # [[ 0.  1.]
>>> #  [ 2.  3.]]
>>> init()
>>> # Process 1 receives the tensor from Process 0.
>>> x = ms.Tensor(np.zeros([2, 2]))
>>> out, handle = irecv(x, src=0)
>>> handle.wait()
>>> print(out)
[[ 0.  1.]
 [ 2.  3.]]