mindspore.ops.AllGather
- class mindspore.ops.AllGather(group=GlobalComm.WORLD_COMM_GROUP)[source]
Gathers tensors from every device in the specified communication group and concatenates them along the first dimension.
Note
The input tensor must have the same shape and format on every process in the collective.
- Parameters
group (str) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP.
- Inputs:
input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).
- Outputs:
Tensor. If the number of devices in the group is N, then the shape of the output is \((N \cdot x_1, x_2, ..., x_R)\): the input tensors from all devices are concatenated along the first dimension.
- Raises
TypeError – If group is not a str.
ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.
- Supported Platforms:
Ascend
GPU
Examples
Note
Before running the following examples, you need to configure the communication environment variables.
For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.
For the GPU devices, users need to prepare the host file and mpi; please see the GPU tutorial for more details.
This example should be run with 2 devices.
>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allgather = ops.AllGather()
...
...     def construct(self, x):
...         return self.allgather(x)
...
>>> input_x = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
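The shape transformation in the example (each of the 2 devices contributing a (2, 8) input and receiving a (4, 8) output) can be simulated locally with NumPy. This is a minimal sketch of the operator's shape semantics only, with no real communication; `simulated_allgather` is a hypothetical helper, not a MindSpore API:

```python
import numpy as np

# Simulate AllGather's shape semantics locally (no real communication):
# each of the N ranks contributes a tensor of shape (x_1, ..., x_R), and
# the result is their concatenation along the first axis.
def simulated_allgather(per_rank_tensors):
    return np.concatenate(per_rank_tensors, axis=0)

# Two simulated devices, each holding a (2, 8) tensor of ones,
# as in the example above.
per_rank = [np.ones((2, 8), dtype=np.float32) for _ in range(2)]
gathered = simulated_allgather(per_rank)
print(gathered.shape)  # (4, 8): first dimension is N * x_1 = 2 * 2
```

This mirrors why the printed output above has 4 rows: the two (2, 8) inputs are stacked along the first dimension.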