Using MindSpore on Mobile and IoT


Implementing an Image Classification Application
It is recommended that you start from the image classification demo on an Android device to understand how to build a MindSpore Lite application project, configure dependencies, and use the related APIs.
Training a LeNet Model
This tutorial walks through the code that trains a LeNet model using the Training-on-Device infrastructure.
Downloading MindSpore Lite
This tutorial describes how to quickly download MindSpore Lite.
Building MindSpore Lite
This tutorial describes how to quickly build MindSpore Lite from source.
Converting Models for Inference
MindSpore Lite provides a tool for offline model conversion that supports multiple model formats. The converted models can then be used for inference.
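As a rough sketch of what conversion looks like, the `converter_lite` tool is invoked from the command line; the model file names below are hypothetical, and the exact flags may vary between releases, so check the converter documentation for your version:

```shell
# Convert a TensorFlow Lite model (hypothetical file name) to the
# MindSpore Lite format. --fmk selects the source framework
# (e.g. TFLITE, TF, ONNX, CAFFE, MINDIR).
./converter_lite --fmk=TFLITE \
    --modelFile=mobilenet_v2.tflite \
    --outputFile=mobilenet_v2
# The converter appends the .ms suffix, producing mobilenet_v2.ms,
# which Runtime can then load for inference.
```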
Optimizing the Model (Quantization After Training)
Converting a trained `float32` model into an `int8` model through post-training quantization can reduce the model size and improve inference performance. This tutorial describes how to use this feature.
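In releases that expose post-training quantization through the converter, it is enabled with an extra flag on the same `converter_lite` command; the flag spelling and the model file names below are assumptions, so verify them against the converter documentation for your release:

```shell
# Convert a float32 MindIR model (hypothetical file name) and apply
# post-training quantization to produce an int8 .ms model.
# --quantType selects the quantization mode; the exact flag name may
# differ across MindSpore Lite releases.
./converter_lite --fmk=MINDIR \
    --modelFile=lenet.mindir \
    --outputFile=lenet_quant \
    --quantType=PostTraining
```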
Preprocessing Image Data
This tutorial describes how to preprocess image data before inference by creating a LiteMat object, so that the data meets the format requirements of the model.
Using Runtime for Model Inference (C++)
After a model is converted with MindSpore Lite, inference is performed through Runtime. This tutorial describes how to write inference code using the C++ API.
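The typical Runtime flow can be sketched as follows. This is a minimal sketch based on the MindSpore Lite 1.x C++ API (`Model::Import`, `LiteSession::CreateSession`, `CompileGraph`, `RunGraph`); the model path is hypothetical, error handling is omitted, and class and method names should be checked against the API reference for your release:

```cpp
#include <fstream>
#include <vector>

// MindSpore Lite headers (assumed 1.x layout).
#include "include/context.h"
#include "include/lite_session.h"
#include "include/model.h"

int main() {
  // 1. Read the converted .ms model file into memory (path is hypothetical).
  std::ifstream ifs("mobilenet_v2.ms", std::ios::binary | std::ios::ate);
  std::streamsize size = ifs.tellg();
  ifs.seekg(0, std::ios::beg);
  std::vector<char> buf(size);
  ifs.read(buf.data(), size);

  // 2. Parse the model buffer and create an inference session.
  auto *model = mindspore::lite::Model::Import(buf.data(), buf.size());
  mindspore::lite::Context context;  // default context: CPU backend
  auto *session = mindspore::session::LiteSession::CreateSession(&context);

  // 3. Compile the graph for the target backend.
  session->CompileGraph(model);

  // 4. Fill the input tensor with preprocessed data, then run the graph.
  auto inputs = session->GetInputs();
  void *in_data = inputs.front()->MutableData();
  // ... copy preprocessed image data (e.g. from a LiteMat) into in_data ...
  session->RunGraph();

  // 5. Read the output tensors and post-process the results.
  auto outputs = session->GetOutputs();

  delete session;
  delete model;
  return 0;
}
```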