mindspore.ops.Send
- class mindspore.ops.Send(sr_tag, dest_rank, group=GlobalComm.WORLD_COMM_GROUP, group_back=GlobalComm.WORLD_COMM_GROUP)
Send tensors to the specified dest_rank.
Note
Send and Receive must be used in combination and must have the same sr_tag.
- Parameters
sr_tag (int) – The tag to identify the send/recv message. The message will be received by the Receive op with the same "sr_tag".
dest_rank (int) – A required integer identifying the destination rank.
group (str, optional) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP.
group_back (str, optional) – The communication group for backpropagation. Default: GlobalComm.WORLD_COMM_GROUP.
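A minimal sketch of constructing the operator with only the required arguments; the tag and rank values are arbitrary, and both group arguments fall back to GlobalComm.WORLD_COMM_GROUP:

>>> from mindspore import ops
>>> # Both `group` and `group_back` default to GlobalComm.WORLD_COMM_GROUP,
>>> # so only the message tag and the destination rank must be given.
>>> send = ops.Send(sr_tag=0, dest_rank=1)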
- Inputs:
input_x (Tensor) - The tensor to be sent, with shape \((x_1, x_2, ..., x_R)\).
- Raises
TypeError – If group is not a str.
RuntimeError – If device target is invalid, or backend is invalid, or distributed initialization fails.
ValueError – If the local rank id of the calling process in the group is larger than the group's rank size.
- Supported Platforms:
Ascend
GPU
Examples
Note
Before running the following examples, you need to configure the communication environment variables.
For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see the msrun startup documentation for more details.
This example should be run with 2 devices.
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore import ops
>>>
>>> init()
>>> class SendNet(nn.Cell):
>>>     def __init__(self):
>>>         super(SendNet, self).__init__()
>>>         self.depend = ops.Depend()
>>>         self.send = ops.Send(sr_tag=0, dest_rank=1, group="hccl_world_group")
>>>
>>>     def construct(self, x):
>>>         out = self.depend(x, self.send(x))
>>>         return out
>>>
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = SendNet()
>>> output = net(input_)
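The example above covers only the sending rank. The process at dest_rank needs a matching Receive with the same sr_tag. A minimal sketch, assuming rank 1 runs the following on the same 2-device setup and that the shape and dtype match the tensor sent above:

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import ops
>>>
>>> init()
>>> class ReceiveNet(nn.Cell):
>>>     def __init__(self):
>>>         super(ReceiveNet, self).__init__()
>>>         # Same sr_tag as the Send op above; shape and dtype must match the sent tensor.
>>>         self.recv = ops.Receive(sr_tag=0, src_rank=0, shape=[2, 8],
>>>                                 dtype=ms.float32, group="hccl_world_group")
>>>
>>>     def construct(self):
>>>         out = self.recv()
>>>         return out
>>>
>>> net = ReceiveNet()
>>> output = net()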
- Tutorial Examples: