mindspore.communication.init

mindspore.communication.init(backend_name=None)

Initialize the distributed backend required by communication services, e.g. "hccl" / "nccl" / "mccl". It is usually used in distributed parallel scenarios and should be called before using any communication service.

Note

  • The full name of "hccl" is Huawei Collective Communication Library(HCCL).

  • The full name of "nccl" is NVIDIA Collective Communication Library(NCCL).

  • The full name of "mccl" is MindSpore Collective Communication Library(MCCL).

  • On Ascend hardware platforms, init() should be called before the definition of any Tensor or Parameter, and before the instantiation and execution of any operation or net; see the sketch below.
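
As a minimal sketch of this ordering (assuming an Ascend environment launched with msrun and the communication environment variables already configured, and using set_context to select the device), init() is called before any Tensor is created:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import init
>>> ms.set_context(device_target="Ascend")   # select the Ascend backend
>>> init()                                   # initialize HCCL before defining any Tensor or Parameter
>>> x = ms.Tensor(np.ones((2, 2)), ms.float32)  # tensors are created only after init()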

Parameters

backend_name (str) – Backend name, one of "hccl" / "nccl" / "mccl". "hccl" should be used for Ascend hardware platforms, "nccl" for GPU hardware platforms, and "mccl" for CPU hardware platforms. If not set, the backend is inferred automatically from the hardware platform type (device_target). Default: None.
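
A brief illustration of explicit backend selection (a sketch assuming a GPU platform launched as a distributed job; on Ascend one would pass "hccl" instead):

>>> from mindspore.communication import init
>>> init("nccl")   # explicitly request NCCL on a GPU platform
>>> # Omitting backend_name lets it be inferred from device_target:
>>> # init()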

Raises
  • TypeError – If backend_name is not a string.

  • RuntimeError – If the device target is invalid, the backend is invalid, distributed initialization fails, or the environment variables RANK_ID / MINDSPORE_HCCL_CONFIG_PATH have not been exported when the backend is HCCL.
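
For example (a hypothetical convenience wrapper, not part of the MindSpore API), the HCCL-related environment variables can be checked up front so a missing configuration produces a clearer message than the RuntimeError raised inside init():

>>> import os
>>> from mindspore.communication import init
>>> def checked_init(backend_name=None):
...     # hypothetical helper: fail early if HCCL env vars are missing
...     if backend_name == "hccl" and "RANK_ID" not in os.environ:
...         raise RuntimeError("export RANK_ID and MINDSPORE_HCCL_CONFIG_PATH before calling init()")
...     init(backend_name)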

Supported Platforms:

Ascend GPU CPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see the msrun startup for more details.

>>> from mindspore.communication import init
>>> init()
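
After init() succeeds, the communication group can be queried; for example (assuming the process was launched as part of a multi-process job via msrun), the rank and group size are obtained with the companion APIs get_rank and get_group_size:

>>> from mindspore.communication import get_rank, get_group_size
>>> rank_id = get_rank()            # index of this process in the communication group
>>> group_size = get_group_size()   # total number of processes in the group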