mindspore.ops.AllReduce
- class mindspore.ops.AllReduce(op=ReduceOp.SUM, group=GlobalComm.WORLD_COMM_GROUP)[source]
Reduces tensor data across all devices so that every device ends up with the same reduced result.
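For example, with op=ReduceOp.SUM across \(N\) devices, every device receives \(\sum_{i=0}^{N-1} x^{(i)}\), where \(x^{(i)}\) denotes the input tensor on device \(i\).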
Note
AllReduce does not currently support the “prod” operation. The tensors must have the same shape and format in every process of the collective.
- Parameters
op (str) - The reduce operation to perform on the tensor data, e.g. sum, max, or min; “prod” is not supported. Default: ReduceOp.SUM.
group (str) - The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP.
- Inputs:
input_x (Tensor) - The shape of the tensor is \((x_1, x_2, ..., x_R)\).
- Outputs:
Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.
- Raises
TypeError – If either op or group is not a str, or fusion is not an integer, or the input’s dtype is bool.
ValueError – If the op is “prod”.
- Supported Platforms:
Ascend GPU
Examples
>>> # This example should be run with two devices. Refer to the tutorial
>>> # Distributed Training on mindspore.cn.
>>> import numpy as np
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
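Both the reduce operation and the communication group are configurable. The following is a minimal sketch, assuming the same two-device launch as the example above; the group name "group_0" and the rank list [0, 1] are illustrative, not prescribed by the API.

>>> # A sketch: element-wise MAX reduction over a user-defined group.
>>> # Assumes the same two-device launch as above; "group_0" and the
>>> # rank list [0, 1] are illustrative choices.
>>> import numpy as np
>>> from mindspore.communication import init, create_group
>>> from mindspore import Tensor
>>> from mindspore.ops import ReduceOp
>>> import mindspore.ops as ops
>>>
>>> init()
>>> create_group("group_0", [0, 1])   # custom communication group over ranks 0 and 1
>>> allreduce_max = ops.AllReduce(ReduceOp.MAX, group="group_0")
>>> input_ = Tensor(np.arange(4).astype(np.float32))
>>> output = allreduce_max(input_)    # every rank in "group_0" gets the element-wise max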