mindspore.ops.Broadcast

class mindspore.ops.Broadcast(root_rank, group=GlobalComm.WORLD_COMM_GROUP)[source]

Broadcasts the tensor to the whole group.

Note

The tensors must have the same shape and format in all processes participating in the collective operation.

Parameters
  • root_rank (int) – Specifies the rank (global rank) of the process that broadcasts the tensor. Only the process with rank root_rank sends the tensor; every other process in the group receives it.

  • group (str, optional) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP . A construction sketch using a custom group follows this list.
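A minimal construction sketch for these parameters, under illustrative assumptions: the custom group name "group0" and its member ranks 0 and 2 are hypothetical, the group is created with mindspore.communication.create_group, and backend support for custom groups may vary.

>>> from mindspore.communication import init, create_group
>>> from mindspore import ops
>>>
>>> init()
>>> # Hypothetical sub-group of global ranks 0 and 2 (illustrative; needs at least 3 devices).
>>> create_group("group0", [0, 2])
>>> # The process with global rank 0 is the source of the broadcast within "group0".
>>> broadcast_op = ops.Broadcast(root_rank=0, group="group0")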

Inputs:
  • input_x (tuple[Tensor]) - The shape of each tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], where each Tensor has the same shape as the corresponding input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises

TypeError – If root_rank is not an integer or group is not a string.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see msrun startup for more details.

This example should be run with 2 devices.

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.broadcast = ops.Broadcast(1)
...
...     def construct(self, x):
...         return self.broadcast((x,))
...
>>> input_x = Tensor(np.ones([2, 4]).astype(np.int32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1],
 [1, 1, 1, 1]]),)
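A short follow-up sketch, assuming the same two-device setup and the Net defined above (the per-rank fill value is only for illustration). It makes the root-dependent output explicit: each rank feeds in a tensor filled with its own rank id, and after the broadcast every rank holds the data of root_rank 1.

>>> from mindspore.communication import get_rank
>>> local_x = Tensor(np.full([2, 4], get_rank()).astype(np.int32))
>>> output = net(local_x)
>>> # Every device now holds a tensor filled with 1, i.e. the data of the rank-1 device.
>>> print(output)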