mindspore.communication.comm_func.P2POp

class mindspore.communication.comm_func.P2POp(op, tensor, peer, group=None, tag=0, *, recv_dtype=None)[source]

Input object for batch_isend_irecv, used to store the information of an "isend" or "irecv" operation.

Note

  • When op is "irecv", a receive shape may be passed in instead of a tensor.

  • tensor will not be modified in-place by the final result.

Parameters
  • op (Union[str, function]) – Only the strings "isend" and "irecv", or the functions comm_func.isend and comm_func.irecv, are allowed.

  • tensor (Union[Tensor, Tuple(int)]) – The tensor to send or receive, or the shape of the tensor to receive when op is "irecv".

  • peer (int) – The remote global rank to send to or receive from.

  • group (str, optional) – The communication group to work on. Default: None, which resolves to GlobalComm.WORLD_COMM_GROUP — "hccl_world_group" on Ascend and "nccl_world_group" on GPU.

  • tag (int, optional) – Currently not supported. Default: 0.

Keyword Arguments

recv_dtype (mindspore.dtype, optional) – The dtype of the tensor to receive. Required when tensor is a shape tuple; ignored otherwise. Default: None.

Returns

A P2POp object.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore.communication.comm_func import P2POp, isend, irecv
>>> from mindspore import Tensor
>>> send_tensor = Tensor(1.)
>>> send_op = P2POp('isend', send_tensor, 1)
>>> send_op = P2POp(isend, send_tensor, 1)
>>> recv_tensor = Tensor(0.)
>>> recv_op = P2POp('irecv', recv_tensor, 0)
>>> recv_op = P2POp(irecv, recv_tensor, 0)
>>> recv_op = P2POp('irecv', (), 0, recv_dtype=mindspore.float32)
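
P2POp objects are typically collected into a list and passed to mindspore.communication.comm_func.batch_isend_irecv, which launches all the send/receive operations together. The following is a minimal two-rank sketch, not part of the official example: it assumes a distributed launch (e.g. via msrun or mpirun) so that init() succeeds, and it cannot run as a single process.

>>> # Sketch only: requires a 2-rank distributed launch.
>>> import mindspore
>>> from mindspore.communication import init, get_rank
>>> from mindspore.communication.comm_func import P2POp, batch_isend_irecv
>>> from mindspore import Tensor
>>> init()
>>> rank = get_rank()
>>> # Each rank sends its own rank id to the other rank and
>>> # receives the peer's value, using a shape tuple for the receive.
>>> send_op = P2POp('isend', Tensor(float(rank)), peer=1 - rank)
>>> recv_op = P2POp('irecv', (), peer=1 - rank, recv_dtype=mindspore.float32)
>>> output = batch_isend_irecv([send_op, recv_op])
>>> # The entry of output corresponding to the irecv op holds the received tensor.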