Programming Forms Overview
MindSpore is an AI framework designed for "device-edge-cloud" full-scenario deployment. It provides users with interfaces for AI model development, training, and inference, and supports developing and debugging neural networks with native Python syntax. The framework offers dynamic-graph, static-graph, and unified dynamic/static programming forms, so that developers can balance development efficiency against execution performance.
For development flexibility and ease of use, MindSpore supports a dynamic graph (PyNative) programming model. Based on the functional and `nn.Cell` interfaces provided by MindSpore, users can flexibly assemble the required network. The relevant interfaces are interpreted and executed line by line, like an ordinary Python library, and support automatic differentiation, which makes them easy to debug and develop with. These interfaces can also dispatch operators asynchronously to accelerator hardware to achieve heterogeneous acceleration.
Meanwhile, on top of the dynamic graph mode, MindSpore provides the `@jit` decorator as an optimization capability: you mark the function to be optimized with `@jit`. The decorated part is parsed as a whole, constructed into a computational graph, and then globally analyzed, compiled, and optimized, accelerating the overall execution of the decorated part. This is referred to as static acceleration.
In addition to the dynamic graph mode, MindSpore provides a static graph programming mode. The model-construction interfaces remain unchanged, and no `@jit` decoration is needed. The MindSpore framework compiles and parses everything defined in the `construct` method of an `nn.Cell` subclass as a whole, building a complete static graph for the network so that it can perform whole-graph compilation, optimization, and execution. This enables model-level optimizations across the entire network, tailored to the characteristics of AI model training and inference, and yields higher execution performance.