PyTorch and MindSpore API Mapping Table
This community-provided table maps PyTorch APIs to MindSpore APIs. The mapped APIs may differ in parameters, inputs, outputs, logic, and specific scenarios. For details, see the description of each API or the difference comparison provided.
MindSpore developers are also welcome to help improve the mapping content. For more information on the differences in framework mechanisms between PyTorch and MindSpore, see: Optimizer Comparison, Random Number Strategy Comparison, and Parameter Initialization Comparison.
API Mapping Consistency Criteria and Exceptions
API mapping consistency criteria: consistent API functionality, consistent number and order of parameters, consistent parameter data types, consistent default values, and consistent parameter names. An API mapping is considered consistent only when all of these conditions are satisfied at the same time.
The API mapping is also consistent in the following exception scenarios:
Exception Scenario 1: Compared to the API mapping consistency criteria, the range of input data types supported by the API parameters differs, including the following sub-scenarios:
(1) The MindSpore API supports passing parameters of type int, float, or bool, but does not support specific bit-width types such as int8 or float64.
(2) The MindSpore API does not support passing parameters of complex type.
Exception Scenario 2: Compared to the MindSpore API, the extra parameters of the PyTorch API are general difference parameters. General difference parameters exist because PyTorch has parameters added for non-functional purposes such as performance optimization, and the performance optimization mechanism of MindSpore differs from that of PyTorch.
Exception Scenario 3: If the MindSpore API, under its default configuration (or when the user does not configure the extra parameters), implements the same functionality as the PyTorch API, then the additional parameters that MindSpore has beyond the PyTorch API are not considered a functional difference.
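Exception scenario 3 can be illustrated with a minimal pure-Python sketch. The function names and the extra `weight_init`/`bias_init` parameters are hypothetical stand-ins (the latter are borrowed from the torch.nn mapping entries below); no framework is required:

```python
def torch_style_linear(in_features, out_features, bias=True):
    # Stand-in for a PyTorch-style constructor signature.
    return ("linear", in_features, out_features, bias)

def mindspore_style_linear(in_features, out_features, bias=True,
                           weight_init=None, bias_init=None):
    # weight_init / bias_init model the extra MindSpore-side parameters;
    # with their defaults untouched, behavior matches the PyTorch-style API,
    # so the mapping still counts as consistent under exception scenario 3.
    if weight_init is None and bias_init is None:
        return torch_style_linear(in_features, out_features, bias)
    return ("linear_custom_init", in_features, out_features, bias,
            weight_init, bias_init)

print(mindspore_style_linear(4, 2) == torch_style_linear(4, 2))  # True
```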
Exception Scenario 4: MindSpore sets the default value of parameters related to the PyTorch overloading mechanism to None, while the corresponding parameters of the PyTorch counterpart API have no default value.
The following is an example of exception scenario 4. In PyTorch 2.1, torch.argmax has two overloads, torch.argmax(input) and torch.argmax(input, dim, keepdim=False): torch.argmax(input) returns the index of the maximum element in the input Tensor, and torch.argmax(input, dim, keepdim=False) returns the indices of the maximum values of the input Tensor along the specified axis.
mindspore.mint.argmax has only one form, mindspore.mint.argmax(input, dim=None, keepdim=False), but mindspore.mint.argmax(input) behaves the same as torch.argmax(input), and mindspore.mint.argmax(input, dim, keepdim) behaves the same as torch.argmax(input, dim, keepdim). Compared to torch.argmax, the default value of the mindspore.mint.argmax parameter dim is set to None only to adapt to the two overload forms of torch.argmax, so under exception scenario 4 this is still considered a consistent API mapping.
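The dispatch pattern described above can be sketched in plain Python, with nested lists standing in for a 2-D tensor. This is an illustrative toy, not the framework implementation; it shows how a single signature with `dim=None` covers both overload forms:

```python
def argmax(matrix, dim=None, keepdim=False):
    """matrix is a list of equal-length rows standing in for a 2-D tensor."""
    if dim is None:
        # First overload: argmax over the flattened input.
        flat = [x for row in matrix for x in row]
        return max(range(len(flat)), key=flat.__getitem__)
    if dim == 0:
        # Second overload: argmax down each column.
        result = [max(range(len(col)), key=col.__getitem__)
                  for col in zip(*matrix)]
    elif dim == 1:
        # Second overload: argmax along each row.
        result = [max(range(len(row)), key=row.__getitem__) for row in matrix]
    else:
        raise ValueError("this sketch only handles dim in (None, 0, 1)")
    if keepdim:
        # Keep the reduced axis as a length-1 dimension.
        return [result] if dim == 0 else [[i] for i in result]
    return result

m = [[1, 9, 3],
     [7, 2, 8]]
print(argmax(m))         # 1  (flattened index of the maximum, 9)
print(argmax(m, dim=1))  # [1, 2]
```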
General Difference Parameter Table
Because of differences in framework mechanisms, MindSpore does not provide the following PyTorch parameters:

| Parameter Names | Functions | Descriptions |
|---|---|---|
| out | Indicates the output Tensor | Assigns the operation result to the out parameter; not supported in MindSpore. |
| layout | Indicates the memory layout strategy | PyTorch supports torch.strided and torch.sparse_coo; not supported in MindSpore. |
| device | Indicates the Tensor storage location | Includes the device type and an optional device number; MindSpore currently supports operator- or network-level device scheduling. |
| requires_grad | Indicates whether to update the gradient | In MindSpore, this can be set through Parameter.requires_grad. |
| pin_memory | Indicates whether to use page-locked (pinned) memory | Not supported in MindSpore. |
| memory_format | Indicates the memory format of the Tensor | Not supported in MindSpore. |
| stable | Indicates whether the sort is stable | Generally used in sorting APIs; not supported in MindSpore. |
| sparse_grad | Indicates whether to sparsify the gradient | Not supported in MindSpore. |
| size_average | Deprecated parameter in PyTorch | The reduction parameter can be used instead. |
| reduce | Deprecated parameter in PyTorch | The reduction parameter can be used instead. |
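The semantic gap behind the `out` parameter in the table above can be sketched in plain Python, with lists standing in for tensors. The function names here are illustrative, not real framework APIs:

```python
def add_with_out(a, b, out=None):
    """PyTorch-style: optionally write the result into a caller-provided buffer."""
    result = [x + y for x, y in zip(a, b)]
    if out is not None:
        out[:] = result   # in-place update of the caller's buffer
        return out
    return result

def add_returning(a, b):
    """MindSpore-style: always return a fresh result; the caller rebinds it."""
    return [x + y for x, y in zip(a, b)]

buf = [0, 0, 0]
add_with_out([1, 2, 3], [4, 5, 6], out=buf)
print(buf)                            # [5, 7, 9]  -- buffer was written in place
print(add_returning([1, 2], [3, 4]))  # [4, 6]
```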
torch
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  |  | The functions are consistent, but the default value of dim is different. |
|  |  | The functions are consistent, but the default value of dim is different. |
|  |  | The functions are consistent, but the default value of end is different. |
|  |  | Consistent functions, inconsistent parameter names. |
|  |  | The functions are consistent, but the default value of dim is different. |
|  |  | The functions are consistent, but the default value of indexing is different. |
|  |  | The functions are consistent, but the default value of end is different. |
|  |  | The parameters of the interface overloads are different. |
|  |  | The functions are consistent, but the default value of low is different. |
|  |  | Consistent functions, but PyTorch involves overloading. |
|  |  | The functions are consistent, but the default value of side is different. |
|  |  | The functions are consistent, but the default value of dim is different. |
torch.linalg
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
torch.distributed
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  | mindspore.mint.distributed.broadcast_object_list |  |
|  |  | Unique to MindSpore |
|  |  | Consistent functions, inconsistent parameter names. |
|  |  | Consistent functions, MindSpore has an additional parameter group_desc=None. |
|  |  | The functions are consistent, but the default value of scatter_list is different. |
torch.nn
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has two additional parameters: weight_init=None and bias_init=None. |
torch.nn.functional
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  | mindspore.mint.nn.functional.binary_cross_entropy_with_logits |  |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter sparse. |
|  |  | The functions are consistent, but the default value of align_corners is different. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter antialias. |
|  |  | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
|  |  | Consistent functions, MindSpore has no parameter inplace. |
torch.special
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
torch.Tensor
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
torch.optim
| PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  |  | The functions are consistent, but PyTorch has some optimization parameters. |
|  |  | The functions are consistent, but PyTorch has some optimization parameters. |
torch.utils
| PyTorch 1.8.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
torchaudio
The current API mapping table corresponds to PyTorch version 1.8.1 and requires Python 3.9 or earlier.

| TorchAudio 0.8.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  |  | Consistent |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
torchtext
The current API mapping table corresponds to PyTorch version 1.8.1 and requires Python 3.9 or earlier.

| TorchText 0.9.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
torchvision
The current API mapping table corresponds to PyTorch version 1.8.1 and requires Python 3.9 or earlier.

| TorchVision 0.9.1 APIs | MindSpore APIs | Descriptions |
|---|---|---|
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | Consistent |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | Consistent |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | Consistent |
|  |  | The functions are consistent, but the parameter names are inconsistent. |
|  |  | Consistent |
|  |  | The functions are consistent, but the parameter names are inconsistent. |