Model Inference
MindSpore Inference
Provides out-of-the-box deployment and inference capabilities for large language models, achieving optimal performance based on the characteristics of each model; a minimal inference sketch follows the topic list below. Topics in this part:

- MindSpore Large Language Model Inference
  - Obtaining and Preparing Large Language Model Weights
  - Building a Large Language Model Inference Network from Scratch
  - Building a Parallel Large Language Model Network
  - Multi-device Model Weight Sharding
  - Model Export
  - Model Quantization
  - Model Performance Profiler
  - Custom Operators
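As a rough illustration of what such an inference flow looks like, here is a minimal greedy-decoding sketch in MindSpore. `LlamaNet` and `tokenizer` are hypothetical placeholders for a network and tokenizer of the kind built in the sub-pages above, and `llama.ckpt` is an example checkpoint path; see those pages for the actual construction steps.

```python
# Minimal greedy-decoding sketch (hypothetical model and tokenizer).
import mindspore as ms

ms.set_context(mode=ms.PYNATIVE_MODE)

net = LlamaNet()                           # hypothetical nn.Cell defining the LLM
net.set_train(False)                       # switch to inference mode
params = ms.load_checkpoint("llama.ckpt")  # example checkpoint path
ms.load_param_into_net(net, params)

token_ids = tokenizer.encode("Hello")      # hypothetical tokenizer -> list[int]
for _ in range(32):                        # generate up to 32 new tokens
    logits = net(ms.Tensor([token_ids], ms.int32))   # [1, seq_len, vocab_size]
    next_id = int(logits[0, -1].argmax().asnumpy())  # greedy: most likely token
    token_ids.append(next_id)
print(tokenizer.decode(token_ids))
```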
MindSpore Lite Inference
A lightweight inference engine focused on efficient deployment of offline models and high-performance inference on device-side (edge) hardware; a minimal usage sketch follows the topic list below. Topics in this part:

- Lite Inference Overview
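For a concrete feel of the Lite workflow, the following is a minimal sketch using the `mindspore_lite` Python package (API names as in the 2.x releases; the MindIR file name is an example, and exact signatures should be verified against the Lite Inference Overview).

```python
# Minimal MindSpore Lite inference sketch (example model path).
import numpy as np
import mindspore_lite as mslite

context = mslite.Context()
context.target = ["cpu"]  # target device; e.g. ["ascend"] on Ascend hardware

model = mslite.Model()
model.build_from_file("mobilenetv2.mindir", mslite.ModelType.MINDIR, context)

inputs = model.get_inputs()
inputs[0].set_data_from_numpy(
    np.random.rand(*inputs[0].shape).astype(np.float32))  # dummy input data
outputs = model.predict(inputs)
print(outputs[0].get_data_to_numpy().shape)                # inspect first output
```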