mindspore.mint.distributed.init_process_group

mindspore.mint.distributed.init_process_group(backend='hccl', init_method=None, timeout=None, world_size=-1, rank=-1, store=None, pg_options=None, device_id=None)

Initializes the collective communication library and creates the default collective communication group.

Note

This method is not supported in the GPU and CPU versions of MindSpore. On Ascend hardware platforms, this API must be called before the definition of any Tensor or Parameter, and before the instantiation and execution of any operation or net.

Parameters
  • backend (str, optional) – The backend to use. Default: "hccl". Currently only "hccl" is supported.

  • init_method (str, invalid) – URL specifying how to initialize the collective communication group. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.

  • timeout (timedelta, invalid) – Timeout for executed APIs. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.

  • world_size (int, optional) – Number of processes participating in the job. Default: -1. If set, it must match the number of processes actually launched (see the sketch after this list).

  • rank (int, invalid) – Rank of the current process. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.

  • store (Store, invalid) – Key/value store accessible to all workers, used to exchange connection/address information. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.

  • pg_options (ProcessGroupOptions, invalid) – Process group options specifying additional options to pass when constructing a specific process group. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.

  • device_id (int, invalid) – The device ID on which to execute. Provided for interface consistency with PyTorch, but not currently supported; any setting is ignored.
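
Among these parameters, only backend and world_size currently take effect. A minimal sketch of passing world_size explicitly, assuming a two-process job (the value must match the number of processes actually launched):

>>> from mindspore.mint.distributed import init_process_group, destroy_process_group
>>> # world_size must equal the launched process count; the default -1
>>> # accepts whatever size the communication group actually has.
>>> init_process_group(backend="hccl", world_size=2)
>>> destroy_process_group()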

Raises
  • ValueError – If backend is not "hccl".

  • ValueError – If world_size is neither -1 nor the number of processes in the communication group.

  • RuntimeError – If the device target is invalid, the backend is invalid, distributed initialization fails, or the environment variables RANK_ID/MINDSPORE_HCCL_CONFIG_PATH have not been exported when the backend is HCCL (see the handling sketch below).
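
Initialization failures surface as the exceptions above. A minimal defensive sketch, purely illustrative (the exact error messages depend on the environment):

>>> from mindspore.mint.distributed import init_process_group
>>> try:
...     init_process_group()
... except (ValueError, RuntimeError) as e:
...     print(f"distributed init failed: {e}")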

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see the msrun startup method for more details.
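
For illustration only, a single-machine launch of an 8-device job with msrun might look like the line below; the flag names should be checked against the msrun documentation for your MindSpore version, and example_script.py is a hypothetical script name:

msrun --worker_num=8 --local_worker_num=8 example_script.py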

>>> from mindspore import set_context
>>> from mindspore.mint.distributed import init_process_group, destroy_process_group
>>> # Target Ascend before any communication call.
>>> set_context(device_target="Ascend")
>>> # Initialize HCCL and create the default communication group.
>>> init_process_group()
>>> # Release the communication resources when done.
>>> destroy_process_group()
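
The default group can also be queried after initialization. A short follow-up sketch, assuming get_rank and get_world_size from mindspore.mint.distributed are available in your version:

>>> from mindspore.mint.distributed import init_process_group, destroy_process_group
>>> from mindspore.mint.distributed import get_rank, get_world_size
>>> init_process_group()
>>> print(get_rank(), get_world_size())  # e.g. prints "0 8" on rank 0 of 8 workers
>>> destroy_process_group()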