# Usage Constraints During Operator Parallel

[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.4.1/docs/mindspore/source_en/api_python/operator_list_parallel.md)

| API name                                                     | Constraints                                                  | Layout configuration constraints                                                  |
| :----------------------------------------------------------- | :----------------------------------------------------------- | :----------------------------------------------------------- |
| [mindspore.ops.Abs](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Abs.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ACos](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ACos.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Acosh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Acosh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Add](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Add.html) | None                                                         | Layout configuration is supported. The input layout should be the same or broadcastable. The output layout cannot be configured.                                     |
| [mindspore.ops.AddN](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.AddN.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ApproximateEqual.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ArgMaxWithValue.html) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. | Not support config layout                                 |
| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ArgMinWithValue.html) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. | Not support config layout                                 |
| [mindspore.ops.Asin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Asin.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Asinh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Asinh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Assign](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Assign.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.AssignAdd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.AssignAdd.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.AssignSub](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.AssignSub.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Atan](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Atan.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Atan2](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Atan2.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Atanh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Atanh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.AvgPool](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.AvgPool.html) | 1. The data format only supports 'NCHW'; <br />2. The shapes of output H/W dimension must be divisible by the split strategies of input H/W dimension;<br />3. If H/W is split: <br />    1) If the kernel_size <= stride, the input slice size must be divisible by stride;<br />    2) It does not support kernel_size > stride;<br />4. In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.AvgPool3D](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.AvgPool3D.html) | 1. The data format only supports 'NCDHW';<br />2. If data exchange between adjacent nodes is involved, only Ascend is supported;<br />3. The W dimension can not be split;<br />4. The output shape of D/H dimension must be divisible by the strategy of input D/H dimensions;<br />5. In valid mode: If D/H dimension is split:<br />    1) When the kernel_size <= stride, the input's slice shape of D/H dimension must be divisible by stride;<br />    2) kernel_size > stride is not supported;<br />6. In the same/pad mode: If D/H dimension is split:<br />    1) If kernel_size >= stride, (Total input length including pad - kernel_size) must be divisible by stride. Otherwise, the pad must be 0 and the slice shape of D/H dimension must be divisible by stride;<br />    2) (Output length * stride - input length) must be divisible by the strategy;<br />    3) The length of data sent and received between adjacent cards must be greater than or equal to 0 and less than or equal to the slice size;<br />7. In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BatchMatMul.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BatchNorm](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BatchNorm.html) | It does not support GPU.                                     | Not support config layout                                 |
| [mindspore.ops.BesselI0e](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BesselI0e.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BesselI1e](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BesselI1e.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BiasAdd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BiasAdd.html) | None                                                         | Support config layout. The second input, bias, should have the same tensor layout as the last dimension of input_x. Output layout is not open for configuration.                                 |
| [mindspore.ops.BitwiseAnd](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BitwiseAnd.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BitwiseOr](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BitwiseOr.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BitwiseXor](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BitwiseXor.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.BoundingBoxEncode](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BoundingBoxEncode.html) | 1. The first dimension of input (anchor_box) and input (groundtruth_box) can be split; <br /> 2. The sharding strategies of input (anchor_box) and input (groundtruth_box) must be the same. | Not support config layout                                 |
| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.BroadcastTo.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Cast](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Cast.html) | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel mode. | Not support config layout                                 |
| [mindspore.ops.Cdist](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Cdist.html) | 1. The strategy for the 'B' dimension must be the same; <br /> 2. The `M` dimension can't be split. | Not support config layout                                 |
| [mindspore.ops.Ceil](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Ceil.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Concat](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Concat.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | Not support config layout                                 |
| [mindspore.ops.Conv2D](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Conv2D.html) | 1. The data format only supports 'NCHW';<br />2. If data exchange between adjacent nodes is involved, only Ascend is supported;<br />3. When the value of group is not 1, C-in and C-out can not be split;<br />4. The last two dimensions of weight can not be split;<br />5. The output shape of H/W dimension must be divisible by the strategy of input H/W dimensions;<br />6. In valid mode: If H/W dimension is split:<br />    1) When the kernel_size <= stride (kernel_size is dilation * (kernel_size - 1) + 1, the same below), the input's slice shape of H/W dimension must be divisible by stride;<br />    2) kernel_size > stride is not supported;<br />7. In the same/pad mode: If H/W dimension is split:<br />    1) If kernel_size >= stride, (Total input length including pad - kernel_size) must be divisible by stride. Otherwise, the pad must be 0 and the slice shape of H/W dimension must be divisible by stride;<br />    2) (Output length * stride - input length) must be divisible by the strategy;<br />    3) The length of data sent and received between adjacent cards must be greater than or equal to 0 and less than or equal to the slice size; | Not support config layout                                 |
| [mindspore.ops.Conv3D](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Conv3D.html) | 1. The data format only supports 'NCDHW';<br />2. If data exchange between adjacent nodes is involved, only Ascend is supported;<br />3. When the value of group is not 1, C-in and C-out can not be split;<br />4. The W dimension and the last three dimensions of weight can not be split;<br />5. The output shape of D/H dimension must be divisible by the strategy of input D/H dimensions;<br />6. In valid mode: If D/H dimension is split:<br />    1) When the kernel_size <= stride (kernel_size is dilation * (kernel_size - 1) + 1, the same below), the input's slice shape of D/H dimension must be divisible by stride;<br />    2) kernel_size > stride is not supported;<br />7. In the same/pad mode: If D/H dimension is split:<br />    1) If kernel_size >= stride, (Total input length including pad - kernel_size) must be divisible by stride. Otherwise, the pad must be 0 and the slice shape of D/H dimension must be divisible by stride;<br />    2) (Output length * stride - input length) must be divisible by the strategy;<br />    3) The length of data sent and received between adjacent cards must be greater than or equal to 0 and less than or equal to the slice size;<br />8. In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.Cos](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Cos.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Cosh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Cosh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.CropAndResize](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.CropAndResize.html) | 1. Sharding of the H/W dimension of input (x) and the second dimension of input (boxes) is not supported. <br /> 2. The shard strategy for the first dimension of inputs (boxes) and (box_index) must be the same. | Not support config layout                                 |
| [mindspore.ops.CumProd](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.CumProd.html) | The `axis` dimension for `input` can't be split.             | Not support config layout                                 |
| [mindspore.ops.CumSum](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.CumSum.html) | The same as CumProd.                                         | Not support config layout                                 |
| [mindspore.ops.Div](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Div.html) | None                                                         | Layout configuration is supported. The input layout should be the same or broadcastable. The output layout cannot be configured.                                     |
| [mindspore.ops.DivNoNan](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.DivNoNan.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Dropout](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Dropout.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Elu](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Elu.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.EmbeddingLookup.html) | The same as Gather.                                          | Not support config layout                                 |
| [mindspore.ops.Equal](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Equal.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Erf](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Erf.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Erfc](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Erfc.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Erfinv](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Erfinv.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Exp](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Exp.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ExpandDims](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ExpandDims.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Expm1](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Expm1.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Floor](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Floor.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.FloorDiv](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.FloorDiv.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.FloorMod](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.FloorMod.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Gamma](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Gamma.html) | 1. Set the strategy for `shape`, e.g., for shape=(8, 16) the corresponding strategy can be (2, 4); <br /> 2. The strategy for `alpha` and `beta` must be all-1; <br /> 3. When the strategy set by `shard` is not all-1, the result is inconsistent with the standalone result. | Not support config layout                                 |
| [mindspore.ops.Gather](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Gather.html) | 1. When batch_dims > 0:<br />1) The axis dimension of input_params can not be split;<br />2) Non-uniform split is not supported;<br />2. When batch_dims = 0:<br />1) If uniform split:<br />a) If the axis dimension of input_params is not split, other dimensions can be split;<br />b) If the axis dimension of input_params is split: The input_params only supports 1 and 2 dimensions; The input_indices can not be a scalar tensor; Splitting input_params and input_indices at the same time is not supported; When axis = 0 and the parameter is split in the dimension of axis, the output strategy can be configured. The legal output shard strategy is (indices_strategy, param_strategy[1:]) or ((indices_strategy[0]*param_strategy[0], indices_strategy[1:]), param_strategy[1:])<br />2) Non-uniform split:<br />a) Only axis = 0 is supported;<br />b) The non-uniform split only represents the non-uniformity of the 0th dimension of input_params, and the last dimension of the params slice should be aligned by 32 bytes;<br />c) The number of slices in the 0th dimension of input_params should be equal to that of the last dimension of input_indices;<br />d) Each dimension of input_params can be split, but input_indices can only split the last dimension, and repeated calculation is not supported;<br />e) input_indices must satisfy the following requirement: the tensor values of the next slice must be greater than those of the previous slice. | Not support config layout                                 |
| [mindspore.ops.GatherD](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.GatherD.html) | The dimension corresponding to dim cannot be segmented; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.GatherNd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.GatherNd.html) | The first input can't be split, and the last dimension of the second input can't be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.GeLU](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.GeLU.html) | None                                                         | Support config input layout. Output layout is not open for configuration.                                 |
| [mindspore.ops.Greater](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Greater.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.GreaterEqual.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.HShrink](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.HShrink.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.HSigmoid](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.HSigmoid.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.InplaceAdd](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.InplaceAdd.html) | The first dimension of `x` and `input_v` can't be split.     | Not support config layout                                 |
| [mindspore.ops.InplaceSub](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.InplaceSub.html) | The same as InplaceAdd.                                      | Not support config layout                                 |
| [mindspore.ops.InplaceUpdate](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.InplaceUpdate.html) | The same as InplaceAdd.                                      | Not support config layout                                 |
| [mindspore.ops.Inv](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Inv.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.IOU](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.IOU.html) | The first dimension of the `anchor_boxes` and `gt_boxes` can be split. | Not support config layout                                 |
| [mindspore.ops.IsFinite](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.IsFinite.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.KLDivLoss](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.KLDivLoss.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LayerNorm](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LayerNorm.html) | The strategy for the second input gamma and the third input beta needs to be equal to input_x_strategy[begin_params_axis:], where input_x_strategy is the strategy for the first input (see the LayerNorm sketch after this table). | Support config layout. The layout configuration for the second input gamma and the third input beta needs to be equal to input_x_layout_tuple[begin_params_axis:], where input_x_layout_tuple is the layout configuration for the first input.  |
| [mindspore.ops.L2Loss](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.L2Loss.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.L2Normalize](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.L2Normalize.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | Not support config layout                                 |
| [mindspore.ops.Lerp](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Lerp.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Less](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Less.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LessEqual](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LessEqual.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LinSpace](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LinSpace.html) | There is no need to configure a strategy for `start` and `end`. Just pass in a strategy of length 1 whose value divides `num` evenly. | Not support config layout                                 |
| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LogicalAnd.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LogicalNot](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LogicalNot.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LogicalOr](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LogicalOr.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Log](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Log.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Log1p](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Log1p.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.LogSoftmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | Not support config layout                                 |
| [mindspore.ops.MaskedFill](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.MaskedFill.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.MatMul](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.MatMul.html) | 1. When `transpose_b=True` is set, the input's split strategy must be in the form of ((A, B), (C, B));<br />2. When `transpose_b=False` is set, the input's split strategy must be in the form of ((A, B), (B, C));<br />3. It is supported to set the output's split strategy; the legal output split strategy is ((A, C),) or ((A * B, C),) (see the MatMul sketch after this table). | Support config layout.<br />1. When `transpose_b=True` is set, the input's layout configuration must be in the form of (layout(A, B), layout(C, B)), where A/B/C is the alias name of a device axis or a tuple of alias names;<br />2. When `transpose_b=False` is set, the input's layout configuration must be in the form of (layout(A, B), layout(B, C)), where A/B/C is the alias name of a device axis or a tuple of alias names;<br />3. Configuring the output's layout is supported; the legal output layout configuration is (layout(A, C),) or (layout((A, B), C),), where A/B/C is the alias name of a device axis; when A is a tuple of alias names (A1, A2), the legal output layout configuration is (layout((A1, A2), C),) or (layout((A1, A2, B), C),).                               |
| [mindspore.ops.Maximum](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Maximum.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.MaxPool](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.MaxPool.html) | 1. The data format only supports 'NCHW'; <br />2. The shapes of output H/W dimension must be divisible by the split strategies of input H/W dimension;<br />3. If H/W is split: <br />    1) If the kernel_size <= stride, the input slice size must be divisible by stride;<br />    2) It does not support kernel_size > stride;<br />4. In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.MaxPool3D](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.MaxPool3D.html) | The same as AvgPool3D.                                       | Not support config layout                                 |
| [mindspore.ops.Minimum](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Minimum.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Mish](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Mish.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Mod](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Mod.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Mul](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Mul.html) | None                                                         | Layout configuration is supported. The input layout should be the same or broadcastable. The output layout cannot be configured.                                     |
| [mindspore.ops.MulNoNan](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.MulNoNan.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Neg](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Neg.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.NotEqual](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.NotEqual.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.OneHot](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.OneHot.html) | Only 1-dimensional indices are supported. A strategy must be configured for the output and for the first and second inputs. | Not support config layout                                 |
| [mindspore.ops.OnesLike](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.OnesLike.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Pow](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Pow.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.PReLU](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the shard strategy in channel dimension of input_x should be consistent with weight. | Not support config layout                                 |
| [mindspore.ops.RandomChoiceWithMask](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.RandomChoiceWithMask.html) | Only the all-1 strategy is supported.                        | Not support config layout                                 |
| [mindspore.ops.RealDiv](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.RealDiv.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Reciprocal](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Reciprocal.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ReduceMax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReduceMax.html) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. | Not support config layout                                 |
| [mindspore.ops.ReduceMin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReduceMin.html) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine. | Not support config layout                                 |
| [mindspore.ops.ReduceSum](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReduceSum.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ReduceMean](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReduceMean.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ReLU](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReLU.html) | None                                                         | Support config input layout. Output layout is not open for configuration.                                 |
| [mindspore.ops.ReLU6](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ReLU6.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Reshape](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Reshape.html) | Configuring a sharding strategy is not supported. In auto parallel mode, when the reshape operator is followed by multiple operators, these operators are not allowed to be configured with different shard strategies. | Not support config layout                                 |
| [mindspore.ops.Rint](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Rint.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ResizeNearestNeighbor](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ResizeNearestNeighbor.html) | When `align_corners=True` is set, only the first dimension and the second dimension can be split. | Not support config layout                                 |
| [mindspore.ops.ROIAlign](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ROIAlign.html) | Sharding the H/W dimension of the input(features) and the second dimension of input(rois) is not supported. | Not support config layout                                 |
| [mindspore.ops.Round](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Round.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Rsqrt](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Rsqrt.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterAdd.html) | The second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterDiv.html) | The second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterMax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterMax.html) | The first dimension of the first input cannot be split, the second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterMin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterMin.html) | The first dimension of the first input cannot be split, the second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterMul](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterMul.html) | The second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterNdAdd.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterNdSub.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterNdUpdate.html) | The top m dimensions of the first input cannot be split (m is the size of the last dimension of the second input, indices.shape[-1]). The second input cannot be split. The top n-1 dimension of the third input (n is the dimension of the second input) cannot be split. The partitions of the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterSub](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterSub.html) | The second input cannot be split, and the top n dimensions of the third input (n is the dimension of the second input) cannot be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ScatterUpdate.html) | The first dimension of first input can not be split, the second input can not  be split, and the first n dimensions (n is the dimension size of the second input) of the third input can not be split; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterAdd](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterAdd.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterDiv](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterDiv.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterMax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterMax.html) | The top m dimensions of the first input cannot be split (m is the size of the last dimension of the second input, indices.shape[-1]). The second input cannot be split. The top n-1 dimension of the third input (n is the dimension of the second input) cannot be split. The partitions of the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterMin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterMin.html) | The top m dimensions of the first input cannot be split (m is the size of the last dimension of the second input, indices.shape[-1]). The second input cannot be split. The top n-1 dimension of the third input (n is the dimension of the second input) cannot be split. The partitions of the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterMul](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterMul.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterSub](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterSub.html) | The second input cannot be split, the top n-1 dimension of the third input (n is the dimension of the second input) cannot be split, and the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.TensorScatterUpdate](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TensorScatterUpdate.html) | The top m dimensions of the first input cannot be split (m is the size of the last dimension of the second input, indices.shape[-1]). The second input cannot be split. The top n-1 dimension of the third input (n is the dimension of the second input) cannot be split. The partitions of the remaining k dimensions (excluding the top n-1 dimension) of the third input are consistent with the last k partitions of the first input; In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.Select](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Select.html) | In auto_parallel mode, the dual recursive algorithm is not supported. | Not support config layout                                 |
| [mindspore.ops.SeLU](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SeLU.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Sigmoid](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sigmoid.html) | None                                                         | Support config input layout. Output layout is not open for configuration.                                 |
| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Sign](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sign.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Sin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sin.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Sinh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sinh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Softmax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Softmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | Support config input layout. Output layout is not open for configuration, and can't config layout on the dimension of axis.                                 |
| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of logits and labels can't be split; Only using output[0] is supported. | Not support config layout                                 |
| [mindspore.ops.Softplus](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Softplus.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Softsign](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Softsign.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.SoftShrink](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SoftShrink.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SparseGatherV2.html) | The same as Gather.                                          | Not support config layout                                 |
| [mindspore.ops.Split](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Split.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | Support config layout, and can't config layout on the dimension of axis.                                 |
| [mindspore.ops.Sqrt](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sqrt.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Square](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Square.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.SquaredDifference](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.SquaredDifference.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Squeeze](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Squeeze.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Stack](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Stack.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.StridedSlice](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.StridedSlice.html) | Only masks with all-0 values are supported; a dimension that is split must be fully extracted; splitting is supported only when the stride of that dimension is 1. | Not support config layout                                 |
| [mindspore.ops.Slice](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Slice.html) | A dimension that is split must be fully extracted.     | Not support config layout                                 |
| [mindspore.ops.Sub](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Sub.html) | None                                                         | Layout configuration is supported. The input layout should be the same or broadcastable. The output layout cannot be configured.                                     |
| [mindspore.ops.Tan](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Tan.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Tanh](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Tanh.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Tile](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Tile.html) | Only support configuring shard strategy for multiples.       | Not support config layout                                 |
| [mindspore.ops.TopK](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TopK.html) | The input_x can't be split into the last dimension, otherwise it's inconsistent with the single machine in the mathematical logic. | Not support config layout                                 |
| [mindspore.ops.Transpose](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Transpose.html) | None                                                         | Support config layout, and the output layout cannot be configured.                                |
| [mindspore.ops.TruncateDiv](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TruncateDiv.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.TruncateMod](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.TruncateMod.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Unique](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Unique.html) | Only support the repeat calculate shard strategy (1,).       | Not support config layout                                 |
| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.UnsortedSegmentSum.html) | The shard strategies of input_x and segment_ids must be the same on the dimensions covered by segment_ids. | Not support config layout                                 |
| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.UnsortedSegmentMin.html) | The shard strategies of input_x and segment_ids must be the same on the dimensions covered by segment_ids. Note that if segment id i is missing, output[i] will be filled with the maximum value of the input type. The user needs to mask the maximum value to avoid value overflow. Communication operations such as AllReduce will raise a Run Task Error due to overflow. | Not support config layout                                 |
| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.UnsortedSegmentMax.html) | The shard strategies of input_x and segment_ids must be the same on the dimensions covered by segment_ids. Note that if segment id i is missing, output[i] will be filled with the minimum value of the input type. The user needs to mask the minimum value to avoid value overflow. Communication operations such as AllReduce will raise a Run Task Error due to overflow. | Not support config layout                                 |
| [mindspore.ops.Xdivy](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Xdivy.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.Xlogy](https://mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.Xlogy.html) | None                                                         | Not support config layout                                 |
| [mindspore.ops.ZerosLike](https://www.mindspore.cn/docs/en/r2.4.1/api_python/ops/mindspore.ops.ZerosLike.html) | None                                                         | Not support config layout                                 |
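
The MatMul sketch below is a minimal illustration of how the constraints in the MatMul row above can be expressed. It assumes an 8-device setup, `semi_auto_parallel` mode, and that distributed initialization has been performed elsewhere; the device matrix `(2, 2, 2)` and the alias names `dp`, `sp`, `mp` are illustrative choices, not part of the operator's API.

```python
import mindspore as ms
from mindspore import nn, ops, Layout

# Illustrative context: 8 devices, semi-auto parallel mode.
ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)

class MatMulNet(nn.Cell):
    def __init__(self):
        super().__init__()
        # transpose_b=False, so the split strategy takes the form ((A, B), (B, C)).
        # Here A=2, B=4, C=1, which uses 2 * 4 * 1 = 8 devices.
        self.matmul1 = ops.MatMul(transpose_b=False)
        self.matmul1.shard(((2, 4), (4, 1)))

        # The same kind of constraint written as a layout configuration:
        # the reduced dimension must carry the same alias ("sp") in both inputs.
        layout = Layout((2, 2, 2), ("dp", "sp", "mp"))
        self.matmul2 = ops.MatMul(transpose_b=False)
        self.matmul2.shard(in_strategy=(layout("dp", "sp"), layout("sp", "mp")))

    def construct(self, x, w1, w2):
        y = self.matmul1(x, w1)     # x: (m, k), w1: (k, n)
        return self.matmul2(y, w2)  # y: (m, n), w2: (n, p)
```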
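
The LayerNorm sketch below illustrates the LayerNorm row under similar assumptions: a 3-dimensional input of shape (batch, seq, hidden), `begin_params_axis=2`, and the same illustrative 8-device setup. With this configuration gamma and beta have shape (hidden,), so their strategies must equal input_x_strategy[begin_params_axis:].

```python
import mindspore as ms
from mindspore import nn, ops

ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)

class LayerNormNet(nn.Cell):
    def __init__(self):
        super().__init__()
        self.layer_norm = ops.LayerNorm(begin_norm_axis=2, begin_params_axis=2)
        # input_x strategy: (2, 4, 1); gamma/beta strategy: (1,), which equals
        # input_x_strategy[begin_params_axis:] = (1,).
        self.layer_norm.shard(((2, 4, 1), (1,), (1,)))

    def construct(self, x, gamma, beta):
        y, _, _ = self.layer_norm(x, gamma, beta)
        return y
```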

> Repeated calculation means that the devices are not fully used. For example, a cluster has 8 devices to run distributed training, but the splitting strategy only cuts the input into 4 slices. In this case, repeated calculation occurs.
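
For example, the following minimal sketch (the ReLU primitive and the 8-device setup are illustrative) configures a (2, 2) strategy on 8 devices: only 2 * 2 = 4 slices are produced, so each slice is computed repeatedly on 8 / 4 = 2 devices.

```python
import mindspore as ms
from mindspore import nn, ops

ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)

class ReLUNet(nn.Cell):
    def __init__(self):
        super().__init__()
        self.relu = ops.ReLU()
        # 8 devices, but the strategy only produces 2 * 2 = 4 slices of the input,
        # so each slice is computed on 8 / 4 = 2 devices (repeated calculation).
        self.relu.shard(((2, 2),))

    def construct(self, x):
        return self.relu(x)
```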