mindspore.communication.comm_func.all_to_all_single_with_output_shape

mindspore.communication.comm_func.all_to_all_single_with_output_shape(output_shape, tensor, output_split_sizes=None, input_split_sizes=None, group=None)[source]

Scatter and gather the input tensor with the given split sizes to/from all ranks, and return the result in a single tensor.

Note

The 'output_shape' and 'tensor' shapes must match across ranks. Only PyNative mode is supported; Graph mode is not currently supported.
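
To make the data movement concrete, the following is a minimal single-process NumPy sketch (an illustration only, not the MindSpore implementation) that simulates the dim-0 exchange for 2 ranks with the same unequal split sizes used in the Examples below; all variable names here are illustrative.

>>> import numpy as np
>>> world_size = 2
>>> # Per-rank inputs: rank 0 holds 3 rows, rank 1 holds 2 rows.
>>> inputs = [np.arange(9.).reshape(3, 3), np.arange(9., 15.).reshape(2, 3)]
>>> # Rows each rank sends to rank 0 and rank 1 (None would mean an equal split).
>>> input_split_sizes = [[2, 1], [1, 1]]
>>> # Chunk each input along dim 0; chunks[src][dst] travels from src to dst.
>>> chunks = []
>>> for src in range(world_size):
...     offsets = np.cumsum(input_split_sizes[src])[:-1]
...     chunks.append(np.split(inputs[src], offsets, axis=0))
>>> # Each rank concatenates the chunks addressed to it, ordered by source rank.
>>> for dst in range(world_size):
...     gathered = np.concatenate([chunks[src][dst] for src in range(world_size)], axis=0)
...     print(f"rank {dst} receives shape {gathered.shape}")
rank 0 receives shape (3, 3)
rank 1 receives shape (2, 3)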

Parameters
  • output_shape (Union(Tensor, Tuple(int))) – the shape of the tensor gathered and concatenated from remote ranks.

  • tensor (Tensor) – the tensor to be scattered to remote ranks.

  • output_split_sizes (Union(Tuple(int), List(int))) – output split sizes along dim 0. If set to None, the output is split equally by world_size. Default: None. See the consistency sketch after this list.

  • input_split_sizes (Union(Tuple(int), List(int))) – input split sizes along dim 0. If set to None, the input is split equally by world_size. Default: None.

  • group (str, optional) – The communication group to work on. Default: None, which means "hccl_world_group" on Ascend, "nccl_world_group" on GPU.
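
As a quick consistency sketch (the arithmetic below follows from the split semantics above; the values mirror rank 0 in the Examples), dim 0 of the local input must equal the sum of input_split_sizes, and dim 0 of output_shape must equal the sum of output_split_sizes:

>>> input_split_sizes = [2, 1]     # rows this rank sends to rank 0 and rank 1
>>> output_split_sizes = [2, 1]    # rows this rank receives from rank 0 and rank 1
>>> tensor_shape = (3, 3)          # local input: 3 == 2 + 1 rows
>>> output_shape = (3, 3)          # gathered result: 3 == 2 + 1 rows
>>> assert tensor_shape[0] == sum(input_split_sizes)
>>> assert output_shape[0] == sum(output_split_sizes)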

Returns

Tensor, the tensor gathered and concatenated from remote ranks. If the number of elements gathered from remote ranks is zero, a Tensor with value 0 is returned, which has no actual meaning.

Raises
  • TypeError – If tensor is not a Tensor.

  • TypeError – If output_shape is not a tuple or a Tensor.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see the msrun startup for more details.

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore
>>> from mindspore.communication import init, get_rank, get_group_size
>>> from mindspore.communication.comm_func import all_to_all_single_with_output_shape
>>> from mindspore import Tensor
>>>
>>> init()
>>> this_rank = get_rank()
>>> if this_rank == 0:
...     output_shape = (3, 3)
...     tensor = Tensor([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]])
...     result = all_to_all_single_with_output_shape(output_shape, tensor, [2, 1], [2, 1])
>>> if this_rank == 1:
...     output_shape = (2, 3)
...     tensor = Tensor([[9., 10., 11.], [12., 13., 14.]])
...     result = all_to_all_single_with_output_shape(output_shape, tensor)
>>> print(result)
rank 0:
[[ 0.  1.  2.]
 [ 3.  4.  5.]
 [ 9. 10. 11.]]
rank 1:
[[ 6.  7.  8.]
 [12. 13. 14.]]