Differences with torch.distributed.all_reduce
torch.distributed.all_reduce
```python
torch.distributed.all_reduce(
    tensor,
    op=<ReduceOp.SUM: 0>,
    group=None,
    async_op=False
)
```
For more information, see torch.distributed.all_reduce.
mindspore.ops.AllReduce
```python
class mindspore.ops.AllReduce(
    op=ReduceOp.SUM,
    group=GlobalComm.WORLD_COMM_GROUP
)(input_x)
```
For more information, see mindspore.ops.AllReduce.
Differences
PyTorch: The inputs are the tensor `tensor` contributed by the current process (which also serves as the output), the AllReduce operation `op`, the communication group `group`, and the async op flag `async_op`. After the AllReduce operation, the result is written back to `tensor` in place. If `async_op=True`, the return value is an async work handle; otherwise, it is None.
MindSpore: The input of this interface is a tensor `input_x`. The output is a new tensor with the same shape as `input_x`, generated by the AllReduce operation configured by `op` within the communication group `group`. This interface currently does not support configuring `async_op`.
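For comparison, a minimal sketch of the MindSpore call. It assumes the distributed environment has been initialized with `mindspore.communication.init` and that the script is launched on multiple devices (for example, with `mpirun`); the variable names are illustrative.

```python
import numpy as np
from mindspore import Tensor, ops
from mindspore.communication import init

init()  # assumption: a multi-device launch so the communication backend can initialize
all_reduce_sum = ops.AllReduce(op=ops.ReduceOp.SUM)

input_x = Tensor(np.ones([2, 2]).astype(np.float32))
# Unlike PyTorch, input_x is not modified in place; a new tensor of the same
# shape holding the reduced result is returned.
output = all_reduce_sum(input_x)
print(output)
```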
| Class | Sub-class | PyTorch | MindSpore | Difference |
| --- | --- | --- | --- | --- |
| Parameters | Parameter 1 | tensor | - | PyTorch: the input tensor; the result is written back to it after the AllReduce operation. MindSpore does not have this parameter. |
| | Parameter 2 | op | op | No difference |
| | Parameter 3 | group | group | No difference |
| | Parameter 4 | async_op | - | PyTorch: the async op flag. MindSpore does not have this parameter. |
| Input | Single input | - | input_x | PyTorch: not applicable. MindSpore: the input tensor of AllReduce. |