Usage Constraints During Operator Parallel
| API name | Constraints |
| --- | --- |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result. |
| … | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | 1. The data format only supports ‘NCHW’; … |
| … | 1. The data format only supports ‘NCDHW’; … |
| … | … |
| … | It does not support GPU. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | 1. The first dimension of input (anchor_box) and input (groundtruth_box) can be split; … |
| … | None |
| … | The shard strategy is ignored in auto parallel and semi-auto parallel modes. |
| … | 1. The strategy for the ‘B’ dimension must be the same; … |
| … | None |
| … | input_x can’t be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | 1. The data format only supports ‘NCHW’; … |
| … | 1. The data format only supports ‘NCDHW’; … |
| … | None |
| … | None |
| … | 1. Sharding the H/W dimensions of input (x) and the second dimension of input (boxes) is not supported. |
| … | The … |
| … | The same as CumProd. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | The same as Gather. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | 1. Set the strategy for … |
| … | 1. Uniform split: … |
| … | The dimension corresponding to dim cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first input can’t be split, and the last dimension of the second input can’t be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | The first dimension of … |
| … | The same as InplaceAdd. |
| … | The same as InplaceAdd. |
| … | None |
| … | The first dimension of the … |
| … | None |
| … | None |
| … | None |
| … | input_x can’t be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | None |
| … | None |
| … | None |
| … | You don’t need to configure a strategy for … |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | The logits can’t be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | None |
| … | 1. … |
| … | None |
| … | 1. The data format only supports ‘NCHW’; … |
| … | The same as AvgPool3D. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | Only 1-dimensional indices are supported. The strategy must be configured for the output and for the first and second inputs. |
| … | None |
| … | None |
| … | When the shape of weight is not [1], the shard strategy of the channel dimension of input_x must be consistent with that of weight. |
| … | Only the all-1 strategy is supported. |
| … | None |
| … | None |
| … | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result. |
| … | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | Configuring a shard strategy is not supported. In auto parallel mode, if multiple operators are followed by a reshape operator, these operators are not allowed to be configured with different shard strategies. |
| … | On the GPU platform, the H and W dimensions cannot be split; on the Ascend platform, the H dimension cannot be split, and the output shape of the W dimension must be divisible by the strategy. |
| … | None |
| … | When … |
| … | Sharding the H/W dimensions of input (features) and the second dimension of input (rois) is not supported. |
| … | None |
| … | None |
| … | The second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first dimension of the first input cannot be split, the second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first dimension of the first input cannot be split, the second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first m dimensions of the first input cannot be split (m is the value of the last dimension of the second input, indexes[-1]), the second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first dimension of the first input cannot be split, the second input cannot be split, and the first n dimensions of the third input (n is the dimension of the second input) cannot be split; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first m dimensions of the first input cannot be split (m is the value of the last dimension of the second input, indexes[-1]), the second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first m dimensions of the first input cannot be split (m is the value of the last dimension of the second input, indexes[-1]), the second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | The first m dimensions of the first input cannot be split (m is the value of the last dimension of the second input, indexes[-1]), the second input cannot be split, the first n-1 dimensions of the third input (n is the dimension of the second input) cannot be split, and the split of the remaining k dimensions of the third input (excluding the first n-1 dimensions) must be consistent with the split of the last k dimensions of the first input; in auto_parallel mode, the dual recursive algorithm is not supported. |
| … | In auto_parallel mode, the dual recursive algorithm is not supported. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | The logits can’t be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | The last dimension of logits and labels can’t be split; only output[0] is supported. |
| … | None |
| … | None |
| … | None |
| … | The same as Gather. |
| … | input_x can’t be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | None |
| … | None |
| … | None |
| … | None |
| … | None |
| … | Only a mask with all 0 values is supported; a dimension that is split must be fully extracted; splitting a dimension is supported only when its stride is 1. |
| … | A dimension that is split must be fully extracted. |
| … | None |
| … | None |
| … | None |
| … | Only configuring the shard strategy for multiples is supported. |
| … | input_x can’t be split along the last dimension; otherwise, the result is mathematically inconsistent with the single-machine result. |
| … | None |
| … | None |
| … | None |
| … | Only the repeated-calculation shard strategy (1,) is supported. |
| … | The shard strategy of input_x in the dimensions of segment_ids must be the same as the strategy of segment_ids. |
| … | The shard strategy of input_x in the dimensions of segment_ids must be the same as the strategy of segment_ids. Note that if segment id i is missing, output[i] will be filled with the maximum value of the input data type, and the user needs to mask out this maximum value; otherwise communication operations such as AllReduce will raise a Run Task Error due to overflow. |
| … | The shard strategy of input_x in the dimensions of segment_ids must be the same as the strategy of segment_ids. Note that if segment id i is missing, output[i] will be filled with the minimum value of the input data type, and the user needs to mask out this minimum value; otherwise communication operations such as AllReduce will raise a Run Task Error due to overflow. |
| … | None |
| … | None |
| … | None |
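
Many of the constraints above are phrased in terms of shard strategies and which dimensions may be split. As a concrete illustration, here is a minimal sketch of configuring a strategy that respects an axis-type constraint such as the Softmax one: the softmax axis keeps strategy 1 (unsplit) while the batch dimension is split. It assumes MindSpore's semi-auto parallel mode and the `Primitive.shard` API; the 8-device layout and the `Net` cell are illustrative, and the distributed initialization (e.g., `mindspore.communication.init`) is omitted.

```python
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

# Semi-auto parallel: operators may be given explicit shard strategies.
# (Distributed initialization for the 8 devices is assumed and omitted here.)
ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        self.softmax = ops.Softmax(axis=-1)
        # One strategy tuple per input. (8, 1) splits the batch dimension
        # across 8 devices and keeps the softmax axis (the last dimension)
        # unsplit, which is what the axis-split constraint requires.
        self.softmax.shard(((8, 1),))

    def construct(self, x):
        return self.softmax(x)
```

A strategy such as `((1, 8),)` would instead split the softmax axis itself and violate the constraint, producing results inconsistent with single-machine execution.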
Repeated calculation means that the devices are not fully used. For example, if a cluster has 8 devices running distributed training but the split strategy only cuts the input into 4 slices, repeated calculation will occur: each slice is computed on more than one device.
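
The following sketch (hypothetical shapes and strategy, same assumed API as above) shows a strategy that triggers exactly this case: it produces only 4 slices on an 8-device cluster, so each slice is computed on 2 devices.

```python
import mindspore as ms
import mindspore.ops as ops

ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)

mul = ops.Mul()
# (2, 2) for each input yields 2 * 2 = 4 tensor slices. With 8 devices,
# each slice lands on 8 / 4 = 2 devices, so the same multiplication is
# executed twice; this is the repeated calculation described above.
mul.shard(((2, 2), (2, 2)))
```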