mindspore.communication.comm_func.isend

mindspore.communication.comm_func.isend(tensor, dst=0, group=GlobalComm.WORLD_COMM_GROUP, tag=0)[source]

Send a tensor to the specified destination rank dst asynchronously.

Note

Send and Receive must be used in combination and must have the same tag; a paired sketch follows the example below. Only PyNative mode is supported; Graph mode is not currently supported.

Parameters
  • tensor (Tensor) – The tensor to send, with shape \((x_1, x_2, ..., x_R)\).

  • dst (int, optional) – The destination rank (global rank). Default: 0.

  • group (str, optional) – The communication group to work on. Default: "hccl_world_group" on Ascend, "nccl_world_group" on GPU.

  • tag (int, optional) – The send/recv message tag. The message will be received by the Receive op with the same tag. Default: 0.

Returns

CommHandle, an async work handle.
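
Because the send is asynchronous, the returned handle lets communication overlap with computation. A minimal sketch of that pattern, assuming input_ is a Tensor prepared as in the example below:

>>> handle = isend(input_, dst=1)
>>> # ... other computation can run while the send is in flight ...
>>> handle.wait()  # block until the send has completed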

Raises
  • TypeError – If dst is not an int or group is not a str.

  • ValueError – If the rank ID of the process is greater than the rank size of the communication group.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see the msrun startup documentation for more details.

This example should be run with 2 devices.

>>> from mindspore.communication import init
>>> from mindspore.communication.comm_func import isend
>>> from mindspore import Tensor
>>> import numpy as np
>>>
>>> init()
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> # Post the asynchronous send to rank 0, then block until it completes.
>>> handle = isend(input_, 0)
>>> handle.wait()
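
The example above shows only the sending side. Because Send and Receive must be paired with the same tag, the peer rank has to post a matching asynchronous receive. The following is a minimal two-rank sketch of such a pairing, assuming mindspore.communication.comm_func.irecv returns the received tensor together with its handle and that mindspore.communication.get_rank is available; run it with 2 devices.

>>> from mindspore import Tensor
>>> from mindspore.communication import init, get_rank
>>> from mindspore.communication.comm_func import isend, irecv
>>> import numpy as np
>>>
>>> init()
>>> if get_rank() == 1:
...     # Rank 1 posts the asynchronous send to rank 0; tag must match the receiver's.
...     handle = isend(Tensor(np.ones([2, 8]).astype(np.float32)), dst=0, tag=1)
...     handle.wait()
... else:
...     # Rank 0 posts the matching receive with the same tag.
...     buf = Tensor(np.zeros([2, 8]).astype(np.float32))
...     out, handle = irecv(buf, src=1, tag=1)
...     handle.wait()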