mindspore
Data Presentation
Tensor
- Tensor is a data structure that stores an n-dimensional array.
- Create a new Tensor in Cell.construct() or in a function decorated by @jit.
- A sparse representation of a set of nonzero elements from a tensor at given indices.
- Constructs a sparse tensor in CSR (Compressed Sparse Row) format, with the specified values indicated by values and the row and column positions indicated by indptr and indices.
- A sparse representation of a set of tensor slices at given indices.
- A sparse representation of a set of nonzero elements from a tensor at given indices.
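The CSR layout described above can be illustrated without MindSpore: indptr gives, for each row, the span of entries in indices/values that belong to that row. A minimal pure-Python sketch of the format (illustrative only, not MindSpore's implementation):

```python
# Reconstruct a dense 2x4 matrix from CSR components
# (indptr/indices/values as described for the CSR tensor above).
indptr = [0, 1, 3]        # row i owns entries indptr[i]:indptr[i+1]
indices = [3, 0, 2]       # column position of each stored value
values = [7.0, 1.0, 5.0]  # the nonzero entries themselves
shape = (2, 4)

dense = [[0.0] * shape[1] for _ in range(shape[0])]
for row in range(shape[0]):
    for k in range(indptr[row], indptr[row + 1]):
        dense[row][indices[k]] = values[k]

print(dense)  # [[0.0, 0.0, 0.0, 7.0], [1.0, 0.0, 5.0, 0.0]]
```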
Parameter
- Parameter is a subclass of Tensor. When Parameters are assigned as Cell attributes, they are automatically added to the Cell's list of parameters.
- Inherited from tuple, ParameterTuple is used to save multiple Parameters.
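The automatic registration described above can be sketched in plain Python: a container intercepts attribute assignment and records any Parameter-typed attribute. This is an illustration of the idea only; the Parameter and Cell classes below are toy stand-ins, not MindSpore's implementation.

```python
# Sketch of Parameter auto-registration: assigning a Parameter as an
# attribute records it in the owner's parameter list (illustrative only;
# MindSpore's Cell does this internally).
class Parameter:
    def __init__(self, value, name=""):
        self.value = value
        self.name = name

class Cell:
    def __init__(self):
        # bypass our own __setattr__ for the registry itself
        object.__setattr__(self, "_params", {})

    def __setattr__(self, key, value):
        if isinstance(value, Parameter):
            self._params[key] = value  # auto-register
        object.__setattr__(self, key, value)

    def get_parameters(self):
        return list(self._params.values())

net = Cell()
net.weight = Parameter([1.0, 2.0], name="weight")
net.bias = Parameter([0.0], name="bias")
print([p.name for p in net.get_parameters()])  # ['weight', 'bias']
```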
DataType
- class mindspore.dtype
Create a data type object of MindSpore.
The actual path of dtype is /mindspore/common/dtype.py. Run the following command to import the package:

from mindspore import dtype as mstype
Numeric Type
Currently, MindSpore supports int type, uint type, float type and complex type. The following list gives the details.

- mindspore.int8, mindspore.byte: 8-bit integer
- mindspore.int16, mindspore.short: 16-bit integer
- mindspore.int32, mindspore.intc: 32-bit integer
- mindspore.int64, mindspore.intp: 64-bit integer
- mindspore.uint8, mindspore.ubyte: unsigned 8-bit integer
- mindspore.uint16, mindspore.ushort: unsigned 16-bit integer
- mindspore.uint32, mindspore.uintc: unsigned 32-bit integer
- mindspore.uint64, mindspore.uintp: unsigned 64-bit integer
- mindspore.float16, mindspore.half: 16-bit floating-point number
- mindspore.float32, mindspore.single: 32-bit floating-point number
- mindspore.float64, mindspore.double: 64-bit floating-point number
- mindspore.bfloat16: 16-bit brain floating-point number
- mindspore.complex64: 64-bit complex number
- mindspore.complex128: 128-bit complex number
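The bit widths above correspond to the usual fixed-size machine representations. A quick check of the byte sizes using only the Python standard library's struct module (standard size format codes, nothing MindSpore-specific):

```python
import struct

# Byte sizes behind the bit widths in the dtype list above,
# using standard-size ("<"-prefixed) struct format codes.
print(struct.calcsize("<b"))  # 1 -> int8 / byte
print(struct.calcsize("<h"))  # 2 -> int16 / short
print(struct.calcsize("<i"))  # 4 -> int32
print(struct.calcsize("<q"))  # 8 -> int64
print(struct.calcsize("<e"))  # 2 -> float16 / half
print(struct.calcsize("<f"))  # 4 -> float32 / single
print(struct.calcsize("<d"))  # 8 -> float64 / double
```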
Other Type
For other defined types, see the following list.

- tensor: MindSpore's tensor type. Data format uses NCHW. For details, see tensor.
- bool_: Boolean True or False.
- int_: Integer scalar.
- uint: Unsigned integer scalar.
- float_: Floating-point scalar.
- complex: Complex scalar.
- number: Number, including int_, uint, float_, complex and bool_.
- list_: List constructed by tensor, such as List[T0, T1, ..., Tn], where the element Ti can be of different types.
- tuple_: Tuple constructed by tensor, such as Tuple[T0, T1, ..., Tn], where the element Ti can be of different types.
- function: Function. Returns in one of two ways: when function is not None, it returns Func directly; when function is None, it returns Func(args: List[T0, T1, ..., Tn], retval: T).
- type_type: Type definition of a type.
- type_none: No matching return type, corresponding to type(None) in Python.
- symbolic_key: The value of a variable, used as a key of the variable in env_type.
- env_type: Used to store the gradient of the free variable of a function, where the key is the symbolic_key of the free variable's node and the value is the gradient.
- Convert MindSpore dtype to numpy data type.
- Convert MindSpore dtype to python data type.
- Convert python type to MindSpore type.
- Get the MindSpore data type, which corresponds to python type or variable.
- An enum for quant datatype, contains INT1 ~ INT16, UINT1 ~ UINT16.
Context
- Set context for running environment.
- Get context attribute value according to the input key.
- Set auto parallel context; only data parallel is supported on CPU.
- Get auto parallel context attribute value according to the key.
- Reset auto parallel context attributes to the default values.
- Parallel mode options.
- Set parameter server training mode context.
- Get parameter server training mode context attribute value according to the key.
- Reset parameter server training mode context attributes to the default values.
- Set parameters in the algorithm for parallel strategy searching.
- Get the algorithm parameter config attributes.
- Reset the algorithm parameter attributes.
- Configure detailed heterogeneous training parameters to adjust the offload strategy.
- Get the offload configuration parameters.
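A typical usage sketch of the context API (assuming MindSpore is installed; the parameter values here are illustrative, not required settings):

```python
import mindspore as ms

# Configure the running environment before building the network.
ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")

# Query a single attribute back by key.
current_mode = ms.get_context("mode")
```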
Seed
- Set global seed.
- Get global seed.
Serialization
- Get the status of the asynchronous save checkpoint thread.
- Build the strategy of every parameter in the network.
- Convert a MindIR model to a model of another format.
- Export the MindSpore network into an offline model in the specified format.
- Load MindIR.
- Load checkpoint info from a specified file.
- Load a checkpoint into a net for distributed prediction.
- Load a protobuf file.
- Load parameters into the network; return the list of parameters that were not loaded into the network.
- Merge the parallel strategy between all pipeline stages in pipeline parallel mode.
- Merge parameter slices into one parameter.
- Obfuscate a model of MindIR format.
- Parse data file generated by
- List of original distributed checkpoint rank indexes for obtaining the target checkpoint of a rank_id during the distributed checkpoint conversion.
- Build a rank list; the checkpoints of the ranks in the rank list have the same contents as the local rank that saves the group_info_file_name.
- Save checkpoint to a specified file.
- Save a protobuf file.
- Transform a distributed checkpoint from a source sharding strategy to a destination sharding strategy by rank for a network.
- Transform a distributed checkpoint from a source sharding strategy to a destination sharding strategy for a rank.
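The checkpoint save/load flow above can be modeled with the standard library. This is a conceptual stand-in, not MindSpore's checkpoint format or API: the function names below merely mirror the real ones, a checkpoint is treated as a mapping from parameter names to values, and loading reports which network parameters were not found.

```python
import os
import pickle
import tempfile

# Conceptual checkpoint round trip: save a name->value parameter dict,
# load it back, and merge it into a "network", reporting what was missing.
def save_checkpoint(params, path):
    with open(path, "wb") as f:
        pickle.dump(params, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

def load_param_into_net(net_params, ckpt_params):
    not_loaded = []
    for name in net_params:
        if name in ckpt_params:
            net_params[name] = ckpt_params[name]
        else:
            not_loaded.append(name)
    return not_loaded

path = os.path.join(tempfile.mkdtemp(), "demo.ckpt")
save_checkpoint({"conv.weight": [1.0, 2.0]}, path)
ckpt = load_checkpoint(path)
net = {"conv.weight": None, "fc.bias": None}
print(load_param_into_net(net, ckpt))  # ['fc.bias']
```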
Automatic Differentiation
- A wrapper function to generate the gradient function for the input function.
- A wrapper function to generate the function to calculate the forward output and gradient for the input function.
- When return_ids of
- Compute the Jacobian via forward mode, corresponding to forward-mode differentiation.
- Compute the Jacobian via reverse mode, corresponding to reverse-mode differentiation.
- Compute the Jacobian-vector product of the given network.
- Compute the vector-Jacobian product of the given network.
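The difference between the two products can be seen on a tiny function with a hand-written Jacobian (plain Python, illustrative only): for f(x0, x1) = (x0*x1, x0+x1), the Jacobian at a point is J = [[x1, x0], [1, 1]]; a Jacobian-vector product multiplies J by a tangent vector (forward mode), while a vector-Jacobian product multiplies the transpose of J by a cotangent vector (reverse mode).

```python
# f(x0, x1) = (x0 * x1, x0 + x1); its Jacobian at (x0, x1) is
#   J = [[x1, x0],
#        [1,  1 ]]
def jacobian(x0, x1):
    return [[x1, x0], [1.0, 1.0]]

def jvp(x, v):
    """Forward mode: J @ v, one pass per input-space direction v."""
    J = jacobian(*x)
    return [sum(J[i][j] * v[j] for j in range(2)) for i in range(2)]

def vjp(x, u):
    """Reverse mode: J^T @ u, one pass per output-space direction u."""
    J = jacobian(*x)
    return [sum(J[i][j] * u[i] for i in range(2)) for j in range(2)]

x = (3.0, 4.0)
print(jvp(x, [1.0, 0.0]))  # [4.0, 1.0]  (a column of J: df/dx0)
print(vjp(x, [1.0, 0.0]))  # [4.0, 3.0]  (a row of J: grad of f0)
```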
Parallel Optimization
Automatic Vectorization
- Vectorizing map (vmap) is a kind of higher-order function to map fn along the parameter axes.
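Semantically, vmap lifts fn over a leading axis of its inputs. A pure-Python model of that contract (real vmap implementations, including MindSpore's, vectorize the computation rather than looping):

```python
# Semantic model of vmap: lift fn over the leading axis of its inputs.
# The loop only models the input/output contract; actual vmap
# implementations vectorize instead of iterating.
def vmap(fn):
    def batched(*batched_args):
        return [fn(*args) for args in zip(*batched_args)]
    return batched

def scale_add(x, y):
    return 2 * x + y

batched_fn = vmap(scale_add)
print(batched_fn([1, 2, 3], [10, 20, 30]))  # [12, 24, 36]
```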
Parallel
- Define the input and output layouts of this cell; the parallel strategies of the remaining ops will be generated by sharding propagation.
JIT
- Jit config for compile.
- Create a callable MindSpore graph from a Python function.
- Class decorator for user-defined classes.
- Class decorator for user-defined classes.
- Create a callable MindSpore graph from a Python function.
- Recycle memory used by MindSpore.
- Make a constant value mutable.
- Used to calculate constants in the graph compiling process and improve compile performance in GRAPH_MODE.
- Make the cell reusable.
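The "create a callable graph from a Python function" idea can be modeled as a decorator that compiles a function once per argument signature and reuses the result on later calls. This is a toy illustration of the caching behavior only, not MindSpore's jit machinery:

```python
# Toy model of a jit-style decorator: "compile" once per argument
# signature, then reuse the compiled artifact. Illustrative only.
compile_count = {"n": 0}

def jit(fn):
    cache = {}
    def wrapper(*args):
        key = tuple(type(a).__name__ for a in args)  # crude signature
        if key not in cache:
            compile_count["n"] += 1       # stand-in for graph compilation
            cache[key] = fn               # stand-in for the compiled graph
        return cache[key](*args)
    return wrapper

@jit
def add(x, y):
    return x + y

add(1, 2)
add(3, 4)        # same signature: no recompilation
add(1.0, 2.0)    # new signature: compiles again
print(compile_count["n"])  # 2
```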
Tool
Dataset Helper
- DatasetHelper is a class that processes the MindData dataset and provides the information of the dataset.
- Connect the network with the dataset in dataset_helper.
- A wrapper function to generate a function for the input function.
Debugging and Tuning
- This class enables the profiling of MindSpore neural networks.
- SummaryCollector can help you collect some common information, such as loss, learning rate, computational graph and so on.
- SummaryLandscape can help you collect loss landscape information.
- SummaryRecord is used to record the summary data and lineage data.
- Enable or disable dump for the target and its contents.
Log
- Get the logger level.
- Get logger configurations.
Installation Verification
- Provide a convenient API to check whether the installation is successful.