mindspore.ops.AllReduce
- class mindspore.ops.AllReduce(op=ReduceOp.SUM, group=GlobalComm.WORLD_COMM_GROUP)
Reduces the tensor data across all devices in such a way that all devices will get the same final result.
Note
AllReduce does not currently support the “prod” operation. The input tensors must have the same shape and format across all processes in the collection.
- Parameters
op (str) - Specifies the element-wise reduce operation, such as sum, max, or min. “prod” is not supported. Default: ReduceOp.SUM.
group (str) - The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.
- Inputs:
input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).
- Outputs:
Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.
- Raises
TypeError – If either op or group is not a str, if fusion is not an integer, or if the input’s dtype is bool.
ValueError – If the op is “prod”.
- Supported Platforms:
Ascend
GPU
Examples
>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as ops
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops.operations.comm_ops import ReduceOp
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM, group="nccl_world_group")
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[4. 5. 6. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0.]]
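To build intuition for what the operator computes without a multi-device setup, the semantics can be sketched in plain NumPy: each simulated “device” contributes one tensor, the tensors are reduced element-wise, and every device receives an identical copy of the result. The function name `simulated_all_reduce` is hypothetical and does not use MindSpore’s communication backend.

```python
import numpy as np

def simulated_all_reduce(device_tensors, op="sum"):
    """Simulate AllReduce semantics: every device gets the same reduced tensor.

    This is a conceptual sketch only; real AllReduce exchanges data across
    devices via a communication backend (e.g. HCCL or NCCL).
    """
    if op == "prod":
        # Mirrors the documented restriction: "prod" is not supported.
        raise ValueError('AllReduce does not support "prod".')
    stacked = np.stack(device_tensors)  # requires identical shapes, as the Note says
    if op == "sum":
        reduced = stacked.sum(axis=0)
    elif op == "max":
        reduced = stacked.max(axis=0)
    elif op == "min":
        reduced = stacked.min(axis=0)
    else:
        raise TypeError("op must be one of 'sum', 'max', 'min'")
    # Every device receives an identical copy of the final result.
    return [reduced.copy() for _ in device_tensors]

# Two "devices", each holding a (2, 8) tensor of ones; a SUM AllReduce
# leaves both devices with a tensor of 2s.
outputs = simulated_all_reduce([np.ones((2, 8), np.float32) for _ in range(2)])
print(outputs[0])
```

This mirrors why the real operator requires identical shapes on all processes: the element-wise reduction is only defined when every contribution aligns.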