MindSpore Distributed Operator List

Distributed Operator

| op name | constraints |
| --- | --- |
| mindspore.ops.Abs | None |
| mindspore.ops.ACos | None |
| mindspore.ops.Acosh | None |
| mindspore.ops.Add | None |
| mindspore.ops.ApproximateEqual | None |
| mindspore.ops.ArgMaxWithValue | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. |
| mindspore.ops.ArgMinWithValue | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. |
| mindspore.ops.Asin | None |
| mindspore.ops.Asinh | None |
| mindspore.ops.Assign | None |
| mindspore.ops.AssignAdd | None |
| mindspore.ops.AssignSub | None |
| mindspore.ops.Atan | None |
| mindspore.ops.Atan2 | None |
| mindspore.ops.Atanh | None |
| mindspore.ops.BatchMatMul | `transpose_a=True` is not supported. |
| mindspore.ops.BesselI0e | None |
| mindspore.ops.BesselI1e | None |
| mindspore.ops.BiasAdd | None |
| mindspore.ops.BroadcastTo | None |
| mindspore.ops.Cast | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel modes. |
| mindspore.ops.Ceil | None |
| mindspore.ops.Concat | The input_x can't be split along the axis dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.Cos | None |
| mindspore.ops.Cosh | None |
| mindspore.ops.Div | None |
| mindspore.ops.DivNoNan | None |
| mindspore.ops.DropoutDoMask | Needs to be used in conjunction with DropoutGenMask; configuring a shard strategy is not supported. |
| mindspore.ops.DropoutGenMask | Needs to be used in conjunction with DropoutDoMask. |
| mindspore.ops.Elu | None |
| mindspore.ops.EmbeddingLookup | The same as GatherV2. |
| mindspore.ops.Equal | None |
| mindspore.ops.Erf | None |
| mindspore.ops.Erfc | None |
| mindspore.ops.Exp | None |
| mindspore.ops.ExpandDims | None |
| mindspore.ops.Expm1 | None |
| mindspore.ops.Floor | None |
| mindspore.ops.FloorDiv | None |
| mindspore.ops.FloorMod | None |
| mindspore.ops.GatherV2 | Only 1-dim and 2-dim input_params are supported, and the last dimension of input_params should be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when input_params is split along the axis dimension; splitting input_indices and input_params at the same time is not supported. |
| mindspore.ops.Gather | The same as GatherV2. |
| mindspore.ops.Gelu | None |
| mindspore.ops.GeLU | None |
| mindspore.ops.Greater | None |
| mindspore.ops.GreaterEqual | None |
| mindspore.ops.Inv | None |
| mindspore.ops.L2Normalize | The input_x can't be split along the axis dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.Less | None |
| mindspore.ops.LessEqual | None |
| mindspore.ops.LogicalAnd | None |
| mindspore.ops.LogicalNot | None |
| mindspore.ops.LogicalOr | None |
| mindspore.ops.Log | None |
| mindspore.ops.Log1p | None |
| mindspore.ops.LogSoftmax | The logits can't be split along the axis dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.MatMul | `transpose_a=True` is not supported. |
| mindspore.ops.Maximum | None |
| mindspore.ops.Minimum | None |
| mindspore.ops.Mod | None |
| mindspore.ops.Mul | None |
| mindspore.ops.Neg | None |
| mindspore.ops.NotEqual | None |
| mindspore.ops.OneHot | Only 1-dim indices are supported. A strategy must be configured for the output and for the first and second inputs. |
| mindspore.ops.OnesLike | None |
| mindspore.ops.Pack | None |
| mindspore.ops.Pow | None |
| mindspore.ops.PReLU | When the shape of weight is not [1], the shard strategy of input_x in the channel dimension must be consistent with that of weight. |
| mindspore.ops.RealDiv | None |
| mindspore.ops.Reciprocal | None |
| mindspore.ops.ReduceMax | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. |
| mindspore.ops.ReduceMin | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. |
| mindspore.ops.ReduceSum | None |
| mindspore.ops.ReduceMean | None |
| mindspore.ops.ReLU | None |
| mindspore.ops.ReLU6 | None |
| mindspore.ops.ReLUV2 | None |
| mindspore.ops.Reshape | Configuring a shard strategy is not supported. In auto parallel mode, if multiple operators are followed by the reshape operator, these operators are not allowed to be configured with different shard strategies. |
| mindspore.ops.Round | None |
| mindspore.ops.Rsqrt | None |
| mindspore.ops.Sigmoid | None |
| mindspore.ops.SigmoidCrossEntropyWithLogits | None |
| mindspore.ops.Sign | None |
| mindspore.ops.Sin | None |
| mindspore.ops.Sinh | None |
| mindspore.ops.Softmax | The logits can't be split along the axis dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.SoftmaxCrossEntropyWithLogits | The last dimension of logits and labels can't be split; only using output[0] is supported. |
| mindspore.ops.Softplus | None |
| mindspore.ops.Softsign | None |
| mindspore.ops.SparseGatherV2 | The same as GatherV2. |
| mindspore.ops.Split | The input_x can't be split along the axis dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.Sqrt | None |
| mindspore.ops.Square | None |
| mindspore.ops.Squeeze | None |
| mindspore.ops.StridedSlice | Only masks with all 0 values are supported; a dimension that needs to be split must be fully extracted; splitting is supported only when the stride of that dimension is 1. |
| mindspore.ops.Slice | A dimension that needs to be split must be fully extracted. |
| mindspore.ops.Sub | None |
| mindspore.ops.Tan | None |
| mindspore.ops.Tanh | None |
| mindspore.ops.TensorAdd | None |
| mindspore.ops.Tile | Only configuring the shard strategy for multiples is supported. |
| mindspore.ops.TopK | The input_x can't be split along the last dimension, otherwise the result is mathematically inconsistent with that on a single machine. |
| mindspore.ops.Transpose | None |
| mindspore.ops.Unique | Only the repeated-calculation shard strategy (1,) is supported. |
| mindspore.ops.UnsortedSegmentSum | The shard strategies of input_x and segment_ids must be the same in the dimensions of segment_ids. |
| mindspore.ops.UnsortedSegmentMin | The shard strategies of input_x and segment_ids must be the same in the dimensions of segment_ids. Note that if segment id i is missing, output[i] will be filled with the maximum value of the input type. The user needs to mask the maximum value to avoid overflow; otherwise, communication operations such as AllReduce will raise a Run Task Error due to overflow. |
| mindspore.ops.UnsortedSegmentMax | The shard strategies of input_x and segment_ids must be the same in the dimensions of segment_ids. Note that if segment id i is missing, output[i] will be filled with the minimum value of the input type. The user needs to mask the minimum value to avoid overflow; otherwise, communication operations such as AllReduce will raise a Run Task Error due to overflow. |
| mindspore.ops.ZerosLike | None |
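The 32-byte alignment requirement that GatherV2 (and Gather, SparseGatherV2, EmbeddingLookup) places on input_params can be checked before configuring a strategy. A minimal plain-Python sketch, assuming "32-byte aligned" means the last dimension occupies a whole multiple of 32 bytes for the element type; the helper name and dtype-size table below are illustrative, not part of the MindSpore API:

```python
# Element sizes in bytes for a few common dtypes (illustrative subset).
DTYPE_SIZE = {"float16": 2, "float32": 4, "int32": 4, "int64": 8}

def last_dim_32_byte_aligned(shape, dtype):
    """Return True if the last dimension of `shape` spans a multiple
    of 32 bytes for the given dtype, as the GatherV2 constraint asks."""
    return (shape[-1] * DTYPE_SIZE[dtype]) % 32 == 0

# A [30000, 64] float32 embedding table: 64 * 4 = 256 bytes -> aligned.
print(last_dim_32_byte_aligned((30000, 64), "float32"))  # True
# A [30000, 10] float32 table: 10 * 4 = 40 bytes -> not aligned.
print(last_dim_32_byte_aligned((30000, 10), "float32"))  # False
```

For float32 this reduces to "the last dimension is a multiple of 8"; for float16, a multiple of 16.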

Repeated calculation means that the devices are not fully used. For example, if a cluster has 8 devices running distributed training but the shard strategy only cuts the input into 4 slices, each slice is computed by more than one device, so repeated calculation occurs.
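The arithmetic behind repeated calculation can be sketched in plain Python (the helper name is invented for this illustration and is not a MindSpore API):

```python
from math import prod

def repeat_factor(device_num, strategy):
    """How many devices redundantly compute each slice: the total
    device number divided by the number of slices the shard strategy
    produces. A factor of 1 means the devices are fully used."""
    slices = prod(strategy)
    if device_num % slices != 0:
        raise ValueError("strategy does not evenly divide the device number")
    return device_num // slices

# 8 devices, but the strategy cuts the input into only 4 slices:
# each slice is computed by 2 devices, i.e. repeated calculation.
print(repeat_factor(8, (4, 1)))  # 2
print(repeat_factor(8, (4, 2)))  # 1, no repeated calculation
```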