Differences with torch.distributed.broadcast
torch.distributed.broadcast
torch.distributed.broadcast(
    tensor,
    src=0,
    group=None,
    async_op=False
)
For more information, see torch.distributed.broadcast.
mindspore.communication.comm_func.broadcast
mindspore.communication.comm_func.broadcast(tensor, src=0, group=GlobalComm.WORLD_COMM_GROUP)
For more information, see mindspore.communication.comm_func.broadcast.
Differences
The functionality of the MindSpore API is not fully consistent with that of PyTorch.
PyTorch: the inputs are the tensor to be broadcast or received, the rank (global rank) src of the process that broadcasts the tensor, the communication group to work on, and the async op flag async_op. The process broadcasts the tensor if it is the src process; otherwise it receives the tensor in place. The return value is an async work handle if async_op=True, otherwise None.
MindSpore: the inputs are the tensor to be broadcast, the rank (global rank) src of the process that broadcasts the tensor, and the communication group to work on. It returns a tensor with the same shape as the broadcast tensor. The async op flag async_op and the device list device_ids are not supported.
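A comparable sketch on the MindSpore side, assuming the script is started with a multi-process launcher such as msrun or mpirun so that init() can build the world communication group; the tensor values are illustrative only:

```python
import mindspore as ms
from mindspore.communication import init, get_rank
from mindspore.communication.comm_func import broadcast

# Build the default (world) communication group; requires a distributed launcher.
init()

rank = get_rank()
# Every rank passes a tensor of the same shape and dtype; only the data on src is used.
tensor = ms.Tensor([0.0, 1.0, 2.0, 3.0]) if rank == 0 else ms.ops.zeros(4, ms.float32)

out = broadcast(tensor, src=0)   # no async_op flag; the result is returned, not written in place
print(f"rank {rank}: {out}")
```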
| Categories | Subcategories | PyTorch | MindSpore | Difference |
| --- | --- | --- | --- | --- |
| Parameters | Parameter 1 | tensor | tensor | PyTorch: the tensor to be broadcast in the src process and to be received otherwise. MindSpore: the tensor to be broadcast only; the received data is returned as the output. |
| | Parameter 2 | src | src | No difference |
| | Parameter 3 | group | group | No difference |
| | Parameter 4 | async_op | - | PyTorch: the async op flag. MindSpore: does not have this parameter. |