MindSpore 2.4 Officially Released
Native Support for Supernodes and Upgraded Distributed Parallel Capabilities
Knowledge Map
Mastering MindSpore: From Beginner to Advanced
Tutorials
From Quick Start to Expert
MindSpore Activities
Community Communication and Sharing
Native Distributed Model Training
Provides multiple parallelism strategies for foundation model training, along with easy-to-use APIs for configuring distributed policies, helping developers quickly implement high-performance distributed training of foundation models.
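As a brief illustration, the sketch below shows one common way to set up data-parallel training with MindSpore's distributed APIs. It is a minimal sketch, not a complete recipe: the network, loss, optimizer, and dataset names are illustrative placeholders, and the script assumes it is launched through a distributed launcher on multiple devices.

```python
# Minimal data-parallel training setup (illustrative placeholders throughout).
import mindspore as ms
from mindspore import nn
from mindspore.communication import init
from mindspore.train import Model

init()                                    # initialize the communication backend (e.g. HCCL on Ascend)
ms.set_context(mode=ms.GRAPH_MODE)
ms.set_auto_parallel_context(parallel_mode="data_parallel",
                             gradients_mean=True)  # average gradients across devices

net = nn.Dense(128, 10)                   # placeholder network
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

model = Model(net, loss_fn, opt)
# model.train(num_epochs, train_dataset)  # train_dataset: a sharded mindspore.dataset pipeline
```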
AI4S Converged Computing Framework
Supports full-process programmability across combined AI+HPC workloads, including fine-grained programmable functions, meeting the needs for flexible programming and heterogeneous parallel acceleration in AI for Science (AI4S) scenarios.
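For instance, AI4S workloads often rely on differentiating physical quantities expressed as ordinary Python functions. The sketch below uses MindSpore's functional autodiff (mindspore.grad) on a toy 1-D potential; the potential itself is an illustrative assumption, not a MindSpore sample.

```python
# Functional differentiation of a toy physical quantity.
import mindspore as ms
from mindspore import Tensor
import mindspore.numpy as mnp

def potential(x):
    # toy 1-D potential energy: V(x) = x**4 - 3*x**2 (assumed for illustration)
    return mnp.sum(x ** 4 - 3 * x ** 2)

grad_v = ms.grad(potential)               # dV/dx via functional autodiff

x = Tensor([0.5, 1.0, 1.5], ms.float32)
print(grad_v(x))                          # the force would be -dV/dx
```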
Fully Unleashing Hardware Potential
Provides unified dynamic and static graph programming, is deeply optimized for Ascend AI processors, and maximizes hardware capabilities, helping developers shorten training time and improve inference performance.
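A minimal sketch of this unification, assuming a toy matmul-plus-ReLU function: the same Python code runs eagerly in PyNative mode for easy debugging and, once wrapped with ms.jit, as a compiled static graph.

```python
# One function, two execution modes: eager (PyNative) and compiled graph.
import mindspore as ms
from mindspore import ops

ms.set_context(mode=ms.PYNATIVE_MODE)     # eager, line-by-line execution for debugging

def dense_relu(x, w):
    return ops.relu(ops.matmul(x, w))

dense_relu_jit = ms.jit(dense_relu)       # compile the same function into a static graph

x = ops.ones((2, 4), ms.float32)
w = ops.ones((4, 3), ms.float32)
print(dense_relu(x, w))                   # runs eagerly
print(dense_relu_jit(x, w))               # runs as a compiled graph
```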
Quick Deployment in All Scenarios
Supports rapid deployment in the cloud, at the edge, and on mobile devices, enabling efficient resource utilization and privacy protection so that developers can focus on building AI applications.
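As a sketch of the typical export step before cross-platform deployment, the snippet below serializes a network to the MindIR format, which device-side runtimes such as MindSpore Lite consume; the network and file name here are hypothetical placeholders.

```python
# Export a (placeholder) trained network to MindIR for deployment.
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

net = nn.Dense(4, 2)                      # stands in for a trained network
dummy_input = Tensor(np.ones((1, 4), np.float32))

# Serialize graph structure and weights into a single deployable artifact.
ms.export(net, dummy_input, file_name="demo_model", file_format="MINDIR")
```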
Events & News
Install
Start Learning
Repositories
Ecosystem
Provides open-source AI research projects, case collections, and task-specific SOTA models and their derivatives.