Missing API Processing Policy
You can use the following methods to handle a missing API:
1. Use an equivalent replacement
In some scenarios, an API can be replaced by a functionally equivalent one. For example:
- Squeeze, Flatten, and ExpandDims perform no actual computation: any API that only changes the Tensor shape can be replaced by Reshape.
- When the output shape of AdaptiveAvgPool or AdaptiveMaxPool is 1, it is equivalent to ReduceMean or ReduceMax, respectively, with keep_dims set to True.
- MaxPool and MaxPoolWithArgmax are equivalent when the indices output is not used.
- Sort is equivalent to TopK in the full-sorting scenario.
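These equivalences can be checked numerically. The following is a small sketch using NumPy analogues of the operators (the corresponding MindSpore APIs behave the same way on Tensors):

```python
import numpy as np

x = np.random.uniform(-1, 1, (2, 1, 3)).astype(np.float32)

# Squeeze only changes the shape, so Reshape produces the identical result.
assert np.array_equal(np.squeeze(x, axis=1), x.reshape(2, 3))

# AdaptiveAvgPool2D with output size 1 equals ReduceMean over the spatial
# axes with keep_dims=True: the result keeps shape (N, C, 1, 1).
y = np.random.uniform(-1, 1, (2, 3, 4, 4)).astype(np.float32)
pooled = y.mean(axis=(2, 3), keepdims=True)
assert pooled.shape == (2, 3, 1, 1)
```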
2. Combine existing APIs to implement the equivalent logic
For some missing APIs, the equivalent function can be built from existing MindSpore APIs. The following uses sigmoid focal loss as an example.
First, analyze the algorithm behind the API.
Focal Loss [1] is a method for handling the imbalance between positive and negative samples, and between easy and hard samples, when training a one-stage object detector.
The sigmoid focal loss API is commonly implemented in MMDetection. The following shows how PyTorch implements this API.
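A simplified sketch of such a PyTorch implementation is shown below, paraphrasing MMDetection's `py_sigmoid_focal_loss`; the `weight_reduce_loss` helper that MMDetection imports is inlined here so the block is self-contained:

```python
import torch
import torch.nn.functional as F

def py_sigmoid_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25,
                          reduction='mean', avg_factor=None):
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    # pt is the probability mass on the wrong class; small pt -> easy sample.
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    focal_weight = (alpha * target + (1 - alpha) * (1 - target)) * pt.pow(gamma)
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    if weight is not None:
        if weight.shape != loss.shape and weight.shape[0] == loss.shape[0]:
            weight = weight.view(-1, 1)  # broadcast per-sample weight over classes
        loss = loss * weight
    # weight_reduce_loss logic, inlined:
    if avg_factor is None:
        if reduction == 'mean':
            loss = loss.mean()
        elif reduction == 'sum':
            loss = loss.sum()
    else:
        if reduction == 'mean':
            loss = loss.sum() / avg_factor
        elif reduction != 'none':
            raise ValueError('avg_factor can not be used with reduction="sum"')
    return loss
```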
According to the API mapping table, every API used in the code has a corresponding MindSpore implementation, so the MindSpore version can be written by referring to the preceding PyTorch code.
Then, run a comparison test:
import torch
import numpy as np
import mindspore as ms

np.random.seed(1)

def test_compare(pred, target, weight, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None):
    ms_s_focal_loss = SigmoidFoaclLoss(weight=weight, gamma=gamma, alpha=alpha,
                                       reduction=reduction, avg_factor=avg_factor)
    loss_ms = ms_s_focal_loss(ms.Tensor(pred), ms.Tensor(target))
    loss_pt = py_sigmoid_focal_loss(torch.from_numpy(pred), torch.from_numpy(target),
                                    weight=torch.from_numpy(weight), gamma=gamma,
                                    alpha=alpha, reduction=reduction, avg_factor=avg_factor)
    print(np.max(np.abs(loss_ms.asnumpy() - loss_pt.numpy())))
pred = np.random.uniform(-1, 1, (3, 4)).astype(np.float32)
target = np.random.uniform(-1, 1, (3, 4)).astype(np.float32)
weight = np.random.uniform(0, 1, (3,)).astype(np.float32)
test_compare(pred, target, weight, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None)
test_compare(pred, target, weight, gamma=1.0, alpha=0.5, reduction='sum', avg_factor=None)
test_compare(pred, target, weight, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=0.3)
test_compare(pred, target, weight, gamma=2.0, alpha=0.25, reduction='none', avg_factor=None)
The maximum error in each case is less than 1e-5, which is within reasonable accuracy:
6.891787e-08
1.4305115e-06
2.8014183e-06
3.799796e-07
3. Customize operators
When the missing API cannot be composed from existing APIs, or the performance of the Cell-based composition is poor, you need to implement a custom operator. For details, see Custom Operators.
In addition to migrating the API, you can use the aot development mode of the Custom operator to call PyTorch Aten operators for quick verification. For details, see Using Third-party Operator Libraries Based on Customized Interfaces.
Note that operators implemented in PyTorch are convenient to migrate to the GPU and CPU backends, and most of the operators shown there are GPU and CPU operators. Ascend operators must be developed with TBE, which demands considerable expertise; for Ascend, you are therefore advised to compose officially implemented APIs instead.
4. Seek help from the community
Commit an issue on MindSpore Gitee to suggest developing the missing API.
[1] Lin, T.-Y., et al. "Focal Loss for Dense Object Detection." IEEE Transactions on Pattern Analysis & Machine Intelligence PP.99 (2017): 2999-3007.