mindspore.communication.comm_func.broadcast

mindspore.communication.comm_func.broadcast(tensor, src=0, group=GlobalComm.WORLD_COMM_GROUP)[source]

Broadcasts the tensor to the whole group.

Note

The input tensor must have the same shape and format on every process in the collective group. Only PyNative mode is supported; Graph mode is not currently supported.
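Because only PyNative mode is supported, a minimal sketch of explicitly selecting the execution mode before using this collective is shown below (assuming the communication environment will be initialized separately with init()):

>>> import mindspore as ms
>>> # broadcast only runs in PyNative mode, so select it explicitly.
>>> ms.set_context(mode=ms.PYNATIVE_MODE)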

Parameters
  • tensor (Tensor) – The tensor to be broadcast. The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • src (int, optional) – Specifies the rank (global rank) of the process that broadcasts the tensor. Only the process with rank src broadcasts the tensor; see the sketch after this parameter list. Default: 0.

  • group (str, optional) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP.
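To illustrate the role of src, the following sketch (assuming two processes that have already called init()) has each rank build a different tensor; after the call, every rank holds the data owned by rank src:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import get_rank
>>> from mindspore.communication.comm_func import broadcast
>>> # Each rank fills a tensor with its own rank id; only rank 1's data survives.
>>> data = ms.Tensor(np.full((2, 2), get_rank(), dtype=np.float32))
>>> out = broadcast(tensor=data, src=1)
>>> print(out)  # expected on both ranks
[[1. 1.]
 [1. 1.]]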

Returns

Tensor, which has the same shape as the input tensor \((x_1, x_2, ..., x_R)\).

Raises
  • TypeError – If src is not an integer or group is not a string.

  • RuntimeError – If device target is invalid, or backend is invalid, or distributed initialization fails.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which requires no third-party packages or configuration files. Please refer to the msrun startup documentation for more details.

This example should be run with 2 devices.

>>> import mindspore as ms
>>> from mindspore.communication import init
>>> from mindspore.communication.comm_func import broadcast
>>> import numpy as np
>>> # Launch 2 processes.
>>>
>>> init()
>>> data = ms.Tensor(np.arange(8).reshape([2, 4]).astype(np.float32))
>>> out = broadcast(tensor=data, src=0)
>>> print(out)
[[0. 1. 2. 3.]
 [4. 5. 6. 7.]]
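As a further sketch, the same broadcast can target a user-created communication group instead of the default world group. This assumes a 2-process job and that create_group is available for the chosen backend; the group name "group_0" is illustrative:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import init, create_group
>>> from mindspore.communication.comm_func import broadcast
>>> init()
>>> # Illustrative group containing both ranks of this 2-process job.
>>> create_group("group_0", [0, 1])
>>> data = ms.Tensor(np.arange(4).reshape([2, 2]).astype(np.float32))
>>> out = broadcast(tensor=data, src=0, group="group_0")
>>> print(out)
[[0. 1.]
 [2. 3.]]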