Inference
Models trained with MindSpore can be used for inference on a range of platforms, including the Ascend 910 AI processor, the Ascend 310 AI processor, GPUs, CPUs, and device-side targets. For more details, refer to the corresponding inference tutorials. A minimal export sketch is shown below.
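As an illustration only (not taken from the tutorials), the sketch below exports a trained network to the MindIR format, the portable representation typically consumed for offline inference on platforms such as the Ascend 310, and then loads it back for a quick check. The network, input shape, and file names are placeholders.

```python
# A minimal sketch, assuming a trained network is available; the SimpleNet
# class, input shape, and file names here are illustrative placeholders.
import numpy as np
import mindspore as ms
from mindspore import nn


class SimpleNet(nn.Cell):
    """Placeholder network standing in for a trained model."""
    def __init__(self):
        super().__init__()
        self.dense = nn.Dense(32, 10)

    def construct(self, x):
        return self.dense(x)


net = SimpleNet()
# In practice, restore trained weights first, e.g.:
# ms.load_checkpoint("simple_net.ckpt", net)

# Export to MindIR; the resulting file can be consumed by inference
# runtimes on the platforms listed above.
dummy_input = ms.Tensor(np.zeros((1, 32), dtype=np.float32))
ms.export(net, dummy_input, file_name="simple_net", file_format="MINDIR")

# Load the exported graph back and run it for a quick sanity check.
graph = ms.load("simple_net.mindir")
model = nn.GraphCell(graph)
print(model(dummy_input).shape)
```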
MindSpore also provides MindSpore Serving, a lightweight, high-performance service module that helps developers deploy online inference services efficiently in a production environment. For more details, refer to the Serving tutorials. A rough server-startup sketch follows.
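As a rough sketch of what a Serving deployment can look like, the snippet below starts a Serving server for an already-prepared servable package. The directory layout, servable name, and gRPC address are assumptions for illustration; packaging a servable (exported model plus its servable_config.py) is covered in the Serving tutorials.

```python
# A minimal sketch, assuming a servable package has already been placed
# under servable_dir/my_servable; names and the port are illustrative.
import os
from mindspore_serving import server


def start():
    servable_dir = os.path.abspath("servable_dir")
    # Describe which servable to load and which device to run it on.
    config = server.ServableStartConfig(
        servable_directory=servable_dir,
        servable_name="my_servable",
        device_ids=0,
    )
    server.start_servables(servable_configs=config)
    # Expose the servable over gRPC; a RESTful endpoint can be added
    # analogously with server.start_restful_server.
    server.start_grpc_server(address="127.0.0.1:5500")


if __name__ == "__main__":
    start()
```

A client would then send inference requests to the advertised gRPC address, as described in the Serving tutorials.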