MindSpore Lite

Obtain MindSpore Lite

  • Downloading MindSpore Lite
  • Building Device-side MindSpore Lite
  • Building Cloud-side MindSpore Lite

Quick Start

  • Quick Start to Device-side Inference
  • Quick Start to Cloud-side Inference

Device-side Inference

  • Device-side Inference Sample
  • Post Training Quantization
  • Data Preprocessing
  • Executing Model Inference
  • Performing Inference or Training on MCU or Small Systems
  • Application Specific Integrated Circuit Integration Instructions
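Among the topics above, post-training quantization is the most algorithmic. As a minimal, framework-independent sketch (not the MindSpore Lite implementation), the asymmetric int8 scheme that such converters typically apply maps each float tensor onto int8 via a per-tensor scale and zero point:

```python
# Hedged sketch of asymmetric int8 post-training quantization.
# This is a stand-alone illustration of the scale/zero-point scheme,
# not MindSpore Lite's actual quantizer.

def quantize_int8(values):
    """Quantize a float tensor (flat list) to int8 with scale + zero point."""
    lo = min(min(values), 0.0)          # quantization range must contain 0
    hi = max(max(values), 0.0)
    scale = (hi - lo) / 255.0 or 1.0    # guard against an all-zero tensor
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(weights)
recovered = dequantize(q, s, z)
error = max(abs(a - b) for a, b in zip(recovered, weights))
```

The reconstruction error is bounded by roughly half a quantization step (scale), which is why weight ranges are calibrated on representative data before conversion.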

Device-side Training

  • Device-side Training Sample
  • Executing Model Training

Third-party Hardware Integration

  • Custom Kernel
  • Using Delegate to Support Third-party AI Framework

Device-side Tools

  • Converting Models for Inference
  • Benchmark Tool
  • Static Library Cropper Tool
  • Visualization Tool
  • Model Obfuscation Tool

Cloud-side Inference

  • Performing Inference
  • Performing Concurrent Inference
  • Distributed Inference
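Concurrent inference addresses request-level parallelism: several requests are served at once by a pool of model instances instead of serializing on a single runtime. The following is a hedged sketch of that pattern only; `fake_predict` is a stand-in, not the MindSpore Lite API:

```python
# Sketch of the concurrency pattern behind concurrent inference:
# worker threads borrow model "instances" from a shared pool, so
# requests run in parallel rather than queueing on one runtime.
# fake_predict is a placeholder for a real model's predict call.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

NUM_INSTANCES = 4

def fake_predict(instance_id, request):
    # Placeholder inference: a real instance would run the model here.
    return {"instance": instance_id, "result": request * 2}

instances = Queue()
for i in range(NUM_INSTANCES):
    instances.put(i)

def serve(request):
    inst = instances.get()       # borrow a free model instance (blocks if none)
    try:
        return fake_predict(inst, request)
    finally:
        instances.put(inst)      # return the instance to the pool

with ThreadPoolExecutor(max_workers=NUM_INSTANCES) as pool:
    results = list(pool.map(serve, range(8)))
```

Keeping one runtime instance per worker avoids cross-request contention on mutable inference state, which is the usual reason a dedicated concurrent-inference mode exists.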

Cloud-side Tools

  • Model Converter
    • Offline Conversion of Inference Models
  • Using the Python Interface to Perform Model Conversion
    • Ascend Conversion Tool Description
    • Graph Kernel Fusion Configuration Instructions (Beta Feature)
  • Benchmark Tool
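For the offline model converter, the documented `converter_lite` command line takes the source framework, input model, and output path. The sketch below assembles such a command from Python; the file paths are placeholders, and the command is only executed if the binary actually exists on PATH:

```python
# Hedged sketch of driving the offline converter_lite tool.
# The --fmk / --modelFile / --outputFile flags follow the converter's
# documented usage; model paths here are placeholders.
import shutil
import subprocess

def build_convert_cmd(fmk, model_file, output_file):
    """Assemble a converter_lite invocation as an argv list."""
    return [
        "converter_lite",
        f"--fmk={fmk}",                 # source framework, e.g. TFLITE, ONNX
        f"--modelFile={model_file}",    # input model path (placeholder)
        f"--outputFile={output_file}",  # output path, suffix added by the tool
    ]

cmd = build_convert_cmd("TFLITE", "model.tflite", "model")
if shutil.which("converter_lite"):
    # Only run when the converter binary is actually installed.
    subprocess.run(cmd, check=True)
```

Building the argv as a list (rather than a shell string) avoids quoting issues when model paths contain spaces.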

References

  • Overall Architecture (Lite)
  • Lite Operator List
  • Codegen Operator List
  • Model List
  • Troubleshooting
  • Log

Release Notes

  • Release Notes

© Copyright MindSpore.
