MindSpore Lite Documentation

Downloading MindSpore Lite
Welcome to MindSpore Lite. You can download the release package that matches your local environment and use it directly.
Quick Start to Device-side Inference
This document uses a model inference example to describe how to use basic device-side MindSpore Lite functions.
Building Device-side MindSpore Lite
This chapter introduces how to quickly compile device-side MindSpore Lite.
Building Cloud-side MindSpore Lite
This chapter introduces how to quickly compile cloud-side MindSpore Lite.
Ascend Conversion Tool Description
This article introduces the features of the cloud-side inference model conversion tool for the Ascend backend, such as configuration profile options, dynamic shape, AOE, and custom operators.
Graph Kernel Fusion Configuration Instructions (Beta Feature)
Graph kernel fusion is a network performance optimization technique unique to MindSpore. It automatically analyzes and optimizes the logic of an existing computational graph and, in combination with the capabilities of the target hardware, performs optimizations such as computation simplification and substitution, operator splitting and fusion, and operator special-case compilation, improving the utilization of device compute resources and the overall performance of the network.
Experiencing C++ Simplified Inference Demo
This tutorial provides a MindSpore Lite inference demo. It demonstrates the basic on-device inference process using C++ by inputting random data, executing inference, and printing the inference result.
Experiencing Java Simplified Inference Demo
This tutorial provides a MindSpore Lite inference example program. It demonstrates the basic process of device-side inference with the MindSpore Lite Java interface: inputting random data, executing inference, and printing the inference result.