mindspore.ops.ReduceOp
- class mindspore.ops.ReduceOp[source]
Operation options for reducing tensors. This is an enumerated type, not an operator.
There are four operation options, accessed as attributes of the enumeration:
SUM (ReduceOp.SUM): take the elementwise sum across devices.
MAX (ReduceOp.MAX): take the elementwise maximum.
MIN (ReduceOp.MIN): take the elementwise minimum.
PROD (ReduceOp.PROD): take the elementwise product.
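The semantics of the four options can be illustrated without a multi-device setup: an all-reduce combines the per-rank tensors elementwise with the chosen operation. Below is a minimal sketch that simulates four ranks with NumPy arrays; the variable names and rank values are illustrative, not part of the MindSpore API.

```python
import numpy as np

# Simulated input tensors from four ranks; an all-reduce combines
# them elementwise with the selected ReduceOp.
rank_tensors = [np.array([1.0, 2.0]), np.array([3.0, 0.5]),
                np.array([2.0, 4.0]), np.array([0.5, 1.0])]

sum_result = np.sum(rank_tensors, axis=0)    # ReduceOp.SUM  -> [6.5, 7.5]
max_result = np.max(rank_tensors, axis=0)    # ReduceOp.MAX  -> [3.0, 4.0]
min_result = np.min(rank_tensors, axis=0)    # ReduceOp.MIN  -> [0.5, 0.5]
prod_result = np.prod(rank_tensors, axis=0)  # ReduceOp.PROD -> [3.0, 4.0]
```

After the reduction, every rank holds the same combined tensor; only the combining function differs between the four options.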
- Supported Platforms:
Ascend
GPU
Examples
Note
Before running the following examples, you need to configure the communication environment variables.
For Ascend devices, users need to prepare the rank table and set rank_id and device_id. Please see the rank table Startup for more details.
For GPU devices, users need to prepare the host file and mpi. Please see the mpirun Startup.
For CPU devices, users need to write a dynamic cluster startup script. Please see the Dynamic Cluster Startup.
This example should be run with multiple devices.
>>> import numpy as np
>>> import mindspore
>>> from mindspore.communication import init
>>> from mindspore import Tensor, ops, nn
>>> from mindspore.ops import ReduceOp
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]