mindspore.communication.comm_func

Collective communication functional interfaces

Note that the APIs in the following list require the communication environment variables to be set in advance.

For Ascend/GPU/CPU devices, the msrun startup method is recommended, as it has no third-party or configuration-file dependencies. Please see the msrun startup documentation for more details.

| API Name | Description | Supported Platforms |
|---|---|---|
| mindspore.communication.comm_func.all_gather_into_tensor | Gathers tensors from the specified communication group and returns the all-gathered tensor. | Ascend |
| mindspore.communication.comm_func.all_reduce | Reduces tensors across all devices so that every device gets the same final result, and returns the reduced tensor. | Ascend |
| mindspore.communication.comm_func.all_to_all_single_with_output_shape | Scatters and gathers the input with split sizes to/from all ranks, and returns the result in a single tensor. | Ascend |
| mindspore.communication.comm_func.all_to_all_with_output_shape | Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists. | Ascend |
| mindspore.communication.comm_func.barrier | Synchronizes all processes in the specified group. | Ascend |
| mindspore.communication.comm_func.batch_isend_irecv | Sends and receives batches of tensors asynchronously. | Ascend |
| mindspore.communication.comm_func.broadcast | Broadcasts the tensor to the whole group. | Ascend GPU |
| mindspore.communication.comm_func.gather_into_tensor | Gathers tensors from the specified communication group. | Ascend |
| mindspore.communication.comm_func.irecv | Receives tensors from src asynchronously. | Ascend |
| mindspore.communication.comm_func.isend | Sends tensors to the specified dest_rank asynchronously. | Ascend |
| mindspore.communication.comm_func.recv | Receives tensors from src. | Ascend |
| mindspore.communication.comm_func.send | Sends tensors to the specified dest_rank. | Ascend |
| mindspore.communication.comm_func.P2POp | Object used as batch_isend_irecv input, storing the information of an "isend" or "irecv" operation. | Ascend |
| mindspore.communication.comm_func.reduce | Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor sent to the target process. | Ascend |
| mindspore.communication.comm_func.reduce_scatter_tensor | Reduces and scatters tensors from the specified communication group and returns the reduced and scattered tensor. | Ascend |
| mindspore.communication.comm_func.scatter_tensor | Scatters the tensor evenly across the processes in the specified communication group. | Ascend GPU |
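
The short sketches below illustrate representative usage of a few of these interfaces. They are minimal, hedged examples rather than authoritative reference code: they assume a distributed job launched with msrun (for example, something like `msrun --worker_num=2 --local_worker_num=2 script.py`), PyNative mode, and the default world communication group; keyword names such as `op`, `src`, and `dst` follow common collective-communication conventions and should be checked against each interface's own page.

A minimal all_reduce sketch: every rank contributes a tensor and all ranks receive the elementwise sum.

```python
# Hedged sketch: assumes launch via msrun so that init() can read the
# communication environment variables it sets up.
import mindspore as ms
from mindspore import Tensor
from mindspore.communication import init, get_rank
from mindspore.communication.comm_func import all_reduce
from mindspore.ops import ReduceOp

ms.set_context(mode=ms.PYNATIVE_MODE)
init()

# Each rank contributes its own value; after all_reduce every rank
# holds the same elementwise sum across all ranks.
x = Tensor([float(get_rank() + 1)])
y = all_reduce(x, op=ReduceOp.SUM)  # op keyword and SUM default are assumptions
print(y)
```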
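
all_gather_into_tensor and reduce_scatter_tensor are inverse patterns: the first concatenates per-rank tensors into one tensor on every rank, while the second reduces a full tensor and leaves each rank with one slice. A hedged sketch, assuming concatenation and splitting along the first axis and a SUM default for the reduction:

```python
import mindspore as ms
from mindspore import ops
from mindspore.communication import init, get_rank, get_group_size
from mindspore.communication.comm_func import (all_gather_into_tensor,
                                               reduce_scatter_tensor)

ms.set_context(mode=ms.PYNATIVE_MODE)
init()
world = get_group_size()

# Each rank contributes a [1, 4] tensor and receives the [world, 4]
# concatenation (first-axis concatenation is an assumption here).
local = ops.ones((1, 4), ms.float32) * get_rank()
gathered = all_gather_into_tensor(local)

# The inverse pattern: a [world, 4] input is summed across ranks and
# each rank keeps a [1, 4] slice.
scattered = reduce_scatter_tensor(gathered)
print(gathered.shape, scattered.shape)
```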
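
broadcast copies the source rank's tensor to every rank in the group. A hedged sketch; the `src` keyword is an assumption based on the usual convention:

```python
import mindspore as ms
from mindspore import ops
from mindspore.communication import init, get_rank
from mindspore.communication.comm_func import broadcast

ms.set_context(mode=ms.PYNATIVE_MODE)
init()

# Every rank passes a tensor of the same shape/dtype; all ranks end up
# with rank 0's values (all zeros here, since rank 0 multiplies by 0).
x = ops.ones((2, 2), ms.float32) * get_rank()
y = broadcast(x, src=0)
print(y)
```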
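
send and recv form the blocking point-to-point pair (isend and irecv are their asynchronous counterparts). A two-rank sketch; the `dst`/`src` keywords and the placeholder-tensor contract for recv are assumptions to verify against the per-interface pages:

```python
import mindspore as ms
from mindspore import ops
from mindspore.communication import init, get_rank
from mindspore.communication.comm_func import send, recv

ms.set_context(mode=ms.PYNATIVE_MODE)
init()

if get_rank() == 0:
    # Rank 0 sends a [2, 2] tensor to rank 1.
    send(ops.ones((2, 2), ms.float32), dst=1)
else:
    # The placeholder fixes the expected shape/dtype of the message
    # (an assumption about this API's contract).
    placeholder = ops.zeros((2, 2), ms.float32)
    out = recv(placeholder, src=0)
    print(out)
```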
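
batch_isend_irecv issues several point-to-point operations in one call, each described by a P2POp. A hedged ring-exchange sketch in which every rank sends to its successor and receives from its predecessor; the P2POp argument order and the return structure are assumptions:

```python
import mindspore as ms
from mindspore import ops
from mindspore.communication import init, get_rank, get_group_size
from mindspore.communication.comm_func import P2POp, batch_isend_irecv

ms.set_context(mode=ms.PYNATIVE_MODE)
init()

rank, world = get_rank(), get_group_size()
send_t = ops.ones((2,), ms.float32) * rank
recv_t = ops.zeros((2,), ms.float32)  # placeholder for the incoming data

# One "isend" to the next rank and one "irecv" from the previous rank,
# batched into a single call.
p2p_ops = [P2POp("isend", send_t, (rank + 1) % world),
           P2POp("irecv", recv_t, (rank - 1) % world)]
outputs = batch_isend_irecv(p2p_ops)
# The received tensor is expected among the outputs for the "irecv"
# entry; the exact return contents are an assumption.
print(outputs)
```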