mindspore.communication.comm_func
Collective communication functional interfaces
Note that the APIs in the following list require the communication environment variables to be set up in advance.
For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see the msrun startup documentation for more details.
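As a minimal sketch of that setup, the script below only initializes the communication environment and reports its rank; it is meant to be launched with msrun so that the required environment variables are filled in automatically. The worker counts and the script name train.py in the comment are illustrative assumptions, not fixed requirements.

```python
# Minimal sketch: initialize the distributed environment before calling any
# mindspore.communication.comm_func interface.
# Launch with msrun so the communication environment variables are set
# automatically, e.g. (worker counts and script name are illustrative):
#   msrun --worker_num=8 --local_worker_num=8 train.py
import mindspore as ms
from mindspore.communication import init, get_rank, get_group_size

ms.set_context(mode=ms.PYNATIVE_MODE)

# init() reads the environment prepared by msrun and joins the
# default (world) communication group.
init()

print(f"rank {get_rank()} of {get_group_size()} is ready")
```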
API Name | Description | Supported Platforms
--- | --- | ---
mindspore.communication.comm_func.all_gather_into_tensor | Gathers tensors from the specified communication group and returns the all-gathered tensor. |
mindspore.communication.comm_func.all_reduce | Reduces tensors across all devices so that every device gets the same final result, and returns the reduced tensor. |
mindspore.communication.comm_func.all_to_all_single_with_output_shape | Scatters and gathers the input with the given split sizes to/from all ranks, and returns the result in a single tensor. |
mindspore.communication.comm_func.all_to_all_with_output_shape | Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists. |
mindspore.communication.comm_func.barrier | Synchronizes all processes in the specified group. |
mindspore.communication.comm_func.batch_isend_irecv | Batches asynchronous send and receive operations on tensors. |
mindspore.communication.comm_func.broadcast | Broadcasts the tensor to the whole group. |
mindspore.communication.comm_func.gather_into_tensor | Gathers tensors from the specified communication group. |
mindspore.communication.comm_func.irecv | Receives tensors from src asynchronously. |
mindspore.communication.comm_func.isend | Sends tensors to the specified dest_rank asynchronously. |
mindspore.communication.comm_func.recv | Receives tensors from src. |
mindspore.communication.comm_func.send | Sends tensors to the specified dest_rank. |
mindspore.communication.comm_func.P2POp | Object for batch_isend_irecv input, used to store the information of a point-to-point operation. |
mindspore.communication.comm_func.reduce | Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor sent to the target process. |
mindspore.communication.comm_func.reduce_scatter_tensor | Reduces and scatters tensors from the specified communication group and returns the reduced and scattered tensor. |
mindspore.communication.comm_func.scatter_tensor | Scatters the tensor evenly across the processes in the specified communication group. |
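To illustrate how the collective interfaces in the table are typically combined, here is a hedged sketch that sums a tensor across all ranks and then gathers the per-rank tensors into one. It assumes the module exposes all_reduce and all_gather_into_tensor taking a tensor and using the default (world) group, consistent with the descriptions above; some versions may additionally return an asynchronous handle, so the exact return values should be checked on each API's own page.

```python
# Hedged sketch: collective communication with the functional interfaces.
# Assumes all_reduce and all_gather_into_tensor accept a tensor and use the
# default (world) group, as described in the table above.
import numpy as np
import mindspore as ms
from mindspore import Tensor
from mindspore.communication import init, get_rank
from mindspore.communication.comm_func import all_reduce, all_gather_into_tensor

init()
rank = get_rank()

# Each rank contributes a tensor filled with its own rank id.
x = Tensor(np.full((2, 2), rank, dtype=np.float32))

# all_reduce: every rank receives the same summed result.
summed = all_reduce(x)

# all_gather_into_tensor: the per-rank tensors concatenated along dim 0.
gathered = all_gather_into_tensor(x)

print(f"rank {rank}: sum=\n{summed}\ngathered shape={gathered.shape}")
```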
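Similarly, a hedged sketch of the batched point-to-point path: each rank packs an isend to its successor and an irecv from its predecessor into P2POp objects and hands the list to batch_isend_irecv. The P2POp constructor arguments and the return value of batch_isend_irecv shown here are assumptions based on the descriptions above and should be verified against the per-API pages.

```python
# Hedged sketch: batched point-to-point communication in a ring.
# Assumes P2POp takes an op name ("isend"/"irecv"), a tensor, and a peer rank,
# and that batch_isend_irecv executes the whole list of operations.
import mindspore as ms
from mindspore import Tensor
from mindspore.communication import init, get_rank, get_group_size
from mindspore.communication.comm_func import P2POp, batch_isend_irecv

init()
rank = get_rank()
world_size = get_group_size()

next_rank = (rank + 1) % world_size
prev_rank = (rank - 1 + world_size) % world_size

send_tensor = Tensor(float(rank), dtype=ms.float32)  # payload sent to the next rank
recv_tensor = Tensor(0.0, dtype=ms.float32)          # buffer describing what to receive

# Each rank sends to its successor and receives from its predecessor.
ops = [
    P2POp("isend", send_tensor, next_rank),
    P2POp("irecv", recv_tensor, prev_rank),
]
results = batch_isend_irecv(ops)
print(f"rank {rank} received: {results}")
```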