mindspore.mint.distributed.send

mindspore.mint.distributed.send(tensor, dst=0, group=None, tag=0)

Send a tensor to the specified destination rank (dst).

Note

Only PyNative mode is supported; Graph mode is not currently supported.

Parameters
  • tensor (Tensor) – Tensor to send.

  • dst (int, optional) – The destination rank (global rank). Default: 0.

  • group (str, optional) – The communication group to work on. If None, the default group is used, which is "hccl_world_group" on Ascend. Default: None.

  • tag (int, optional) – An integer identifying the send/recv message tag; the message will be received by the recv operation with the same tag. It is currently a reserved parameter. Default: 0. A fully keyworded call is sketched after this list.
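
For illustration, a fully keyworded call can look as follows. This is a minimal sketch; the tensor values and destination rank are placeholders.

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.mint.distributed import send
>>>
>>> payload = Tensor(np.ones([2, 8]).astype(np.float32))
>>> # group=None selects the default group ("hccl_world_group" on Ascend);
>>> # tag is currently a reserved parameter, so leave it at 0.
>>> send(payload, dst=1, group=None, tag=0)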

Raises
  • TypeError – If tensor is not a Tensor, dst is not an int, or group is not a str.

  • ValueError – If dst is the same as the rank of the current process. A guard sketch follows this list.
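
Because sending to the caller's own rank raises ValueError, callers can guard on their rank before sending. A minimal sketch with a hypothetical helper, assuming the get_rank API from the same module:

>>> from mindspore.mint.distributed import get_rank, send
>>>
>>> def send_if_remote(tensor, dst, group=None):
...     # Sending to one's own rank would raise ValueError, so skip that case.
...     if get_rank() == dst:
...         return
...     send(tensor, dst=dst, group=group)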

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend devices, it is recommended to use the msrun startup method, which requires no third-party or configuration-file dependencies. Please see msrun startup for more details.

This example should be run with 2 devices.

>>> from mindspore.mint.distributed import init_process_group
>>> from mindspore.mint.distributed import send
>>> from mindspore import Tensor
>>> import numpy as np
>>>
>>> init_process_group()
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> # Launch 2 processes. Process 0 sends the tensor to process 1.
>>> send(input_, 1)
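
The receiving rank pairs this call with a matching receive. A minimal two-rank sketch, assuming the companion recv and get_rank APIs from mindspore.mint.distributed:

>>> from mindspore.mint.distributed import init_process_group, get_rank, send, recv
>>> from mindspore import Tensor
>>> import numpy as np
>>>
>>> init_process_group()
>>> # Rank 0 sends; rank 1 receives into a pre-allocated buffer of the same
>>> # shape and dtype.
>>> if get_rank() == 0:
...     send(Tensor(np.ones([2, 8]).astype(np.float32)), dst=1)
... else:
...     out = Tensor(np.zeros([2, 8]).astype(np.float32))
...     recv(out, src=0)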