mindspore
Context
#include <context.h>
The Context class is used to store environment variables during execution.
Public Member Functions
SetThreadNum
void SetThreadNum(int32_t thread_num);
Set the number of threads at runtime. This option is only valid for MindSpore Lite.
Parameters
thread_num
: the number of threads at runtime.
GetThreadNum
int32_t GetThreadNum() const;
Get the current thread number setting.
Returns
The current thread number setting.
SetAllocator
void SetAllocator(const std::shared_ptr<Allocator> &allocator);
Set Allocator, which defines a memory pool for dynamic memory malloc and memory free. This option is only valid for MindSpore Lite.
Parameters
allocator
: A pointer to an Allocator.
GetAllocator
std::shared_ptr<Allocator> GetAllocator() const;
Get the current Allocator setting.
Returns
The current Allocator setting.
MutableDeviceInfo
std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo();
Get a mutable reference of DeviceInfoContext vector in this context. Only MindSpore Lite supports heterogeneous scenarios with multiple members in the vector.
Returns
Mutable reference of DeviceInfoContext vector in this context.
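The calls above are typically combined when preparing a runtime. A minimal sketch, assuming the MindSpore Lite headers and the CPUDeviceInfo class described later in this document:

```cpp
#include <context.h>
#include <memory>

std::shared_ptr<mindspore::Context> MakeCpuContext() {
  // Run with two threads (this option is only valid for MindSpore Lite).
  auto context = std::make_shared<mindspore::Context>();
  context->SetThreadNum(2);
  // Heterogeneous setups may push several DeviceInfoContext members;
  // a single CPU device is configured here.
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());
  return context;
}
```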
DeviceInfoContext
#include <context.h>
DeviceInfoContext defines different device contexts.
Public Member Functions
GetDeviceType
virtual enum DeviceType GetDeviceType() const = 0;
Get the type of this DeviceInfoContext.
Returns
Type of this DeviceInfoContext.
enum DeviceType {
  kCPU = 0,
  kMaliGPU,
  kNvidiaGPU,
  kKirinNPU,
  kAscend910,
  kAscend310,
  // add new type here
  kInvalidDeviceType = 100,
};
Cast
template <class T> std::shared_ptr<T> Cast();
Provides an RTTI-like conversion even when the -fno-rtti compilation option is enabled: converts the DeviceInfoContext to a shared pointer of type T, and returns nullptr if the conversion fails.
Returns
A pointer of type T after conversion. If the conversion fails, it will be nullptr.
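As a sketch of how Cast is used, the concrete device type can be recovered from a stored DeviceInfoContext pointer (assuming a CPUDeviceInfo may have been stored):

```cpp
#include <context.h>
#include <memory>

void ConfigureIfCpu(const std::shared_ptr<mindspore::DeviceInfoContext> &device) {
  // Cast<T>() returns nullptr when the stored object is not a T,
  // so the branch is taken only for a CPUDeviceInfo.
  if (auto cpu = device->Cast<mindspore::CPUDeviceInfo>()) {
    // CPU-specific options (e.g. float16 inference) can be set on `cpu` here.
  }
}
```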
CPUDeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the CPU. This option is only valid for MindSpore Lite.
Public Member Functions
- Set the thread affinity mode. Returns: the thread affinity mode.
- Enable float16 inference. Returns: whether float16 inference is enabled.
MaliGPUDeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the GPU. This option is only valid for MindSpore Lite.
Public Member Functions
- Enable float16 inference. Returns: whether float16 inference is enabled.
KirinNPUDeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the NPU. This option is only valid for MindSpore Lite.
Public Member Functions
- Set the NPU frequency. Returns: the NPU frequency.
NvidiaGPUDeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the GPU. This option is invalid for MindSpore Lite.
Public Member Functions
- Set the device id. Returns: the device id.
Ascend910DeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the Ascend910. This option is invalid for MindSpore Lite.
Public Member Functions
- Set the device id. Returns: the device id.
Ascend310DeviceInfo
#include <context.h>
Derived from DeviceInfoContext; the configuration of the model running on the Ascend310. This option is invalid for MindSpore Lite.
Public Member Functions
- Set the device id. Returns: the device id.
- Set the AIPP configuration file path. Returns: the AIPP configuration file path.
- Set the format of model inputs. Returns: the format of model inputs.
- Set the shape of model inputs. Returns: the shape of model inputs.
- Set the type of model outputs. Returns: the type of model outputs.
- Set the precision mode of the model. Returns: the precision mode.
- Set the op select implementation mode. Returns: the op select implementation mode.
Serialization
#include <serialization.h>
The Serialization class is used to summarize methods for reading and writing model files.
Static Public Member Function
Load
Loads a model file from a path. This method is not supported on MindSpore Lite.
Status Load(const std::string &file, ModelType model_type, Graph *graph, const Key &dec_key = {},
const std::string &dec_mode = kDecModeAesGcm);
Parameters
file
: the path of the model file.
model_type
: the type of the model file; options are ModelType::kMindIR and ModelType::kOM.
graph
: the output parameter, an object that saves graph data.
dec_key
: the decryption key; the key length is 16, 24, or 32.
dec_mode
: the decryption mode; options are AES-GCM and AES-CBC.
Returns
Status code.
Load
Loads multiple models from multiple files. MindSpore Lite does not provide this feature.
Status Load(const std::vector<std::string> &files, ModelType model_type, std::vector<Graph> *graphs,
const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm);
Parameters
files
: the paths of the model files.
model_type
: the type of the model files; options are ModelType::kMindIR and ModelType::kOM.
graphs
: the output parameter, objects that save graph data.
dec_key
: the decryption key; the key length is 16, 24, or 32.
dec_mode
: the decryption mode; options are AES-GCM and AES-CBC.
Returns
Status code.
Load
Loads a model from a memory buffer.
Status Load(const void *model_data, size_t data_size, ModelType model_type, Graph *graph,
const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm);
Parameters
model_data
: a buffer that holds the model file.
data_size
: the size of the buffer.
model_type
: the type of the model file; options are ModelType::kMindIR and ModelType::kOM.
graph
: the output parameter, an object that saves graph data.
dec_key
: the decryption key; the key length is 16, 24, or 32.
dec_mode
: the decryption mode; options are AES-GCM and AES-CBC.
Returns
Status code.
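A minimal usage sketch of the single-file overload, assuming an unencrypted MindIR file so the decryption arguments keep their defaults (kSuccess is assumed to be the success status code):

```cpp
#include <serialization.h>
#include <string>

bool LoadMindIR(const std::string &path, mindspore::Graph *graph) {
  // Load an unencrypted MindIR model; dec_key and dec_mode keep their defaults.
  mindspore::Status ret =
      mindspore::Serialization::Load(path, mindspore::ModelType::kMindIR, graph);
  return ret == mindspore::kSuccess;
}
```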
Model
#include <model.h>
The Model class is used to define a MindSpore model, facilitating computational graph management.
Constructor and Destructor
Model();
~Model();
Public Member Function
Build
Status Build(GraphCell graph, const std::shared_ptr<Context> &model_context);
Builds a model so that it can run on a device.
Parameters
graph
: GraphCell is a derivative of Cell. Cell is not available currently. GraphCell can be constructed from Graph, for example, model.Build(GraphCell(graph), context).
model_context
: a context used to store options during execution.
Returns
Status code.
Modifications to model_context after Build will no longer take effect.
Predict
Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs);
Runs model inference.
Parameters
inputs
: a vector where the model inputs are arranged in sequence.
outputs
: the output parameter, a pointer to a vector; the model outputs are filled into the container in sequence.
Returns
Status code.
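Build and Predict are typically chained as below. A sketch, assuming a Graph already loaded via Serialization::Load and a configured Context (kSuccess is assumed to be the success status code); real code must fill the input tensors with data before calling Predict:

```cpp
#include <context.h>
#include <model.h>
#include <vector>

mindspore::Status RunOnce(mindspore::Graph graph,
                          const std::shared_ptr<mindspore::Context> &context,
                          std::vector<mindspore::MSTensor> *outputs) {
  mindspore::Model model;
  // Build compiles the graph for the devices listed in the context.
  auto ret = model.Build(mindspore::GraphCell(graph), context);
  if (ret != mindspore::kSuccess) {
    return ret;
  }
  // GetInputs() returns the model's input tensors in order.
  return model.Predict(model.GetInputs(), outputs);
}
```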
GetInputs
std::vector<MSTensor> GetInputs();
Obtains all input tensors of the model.
Returns
The vector that includes all input tensors.
GetInputByTensorName
MSTensor GetInputByTensorName(const std::string &tensor_name);
Obtains the input tensor of the model by name.
Returns
The input tensor with the given name; if the name is not found, an invalid tensor is returned.
GetOutputs
std::vector<MSTensor> GetOutputs();
Obtains all output tensors of the model.
Returns
A vector that includes all output tensors.
GetOutputTensorNames
std::vector<std::string> GetOutputTensorNames();
Obtains names of all output tensors of the model.
Returns
A vector that includes names of all output tensors.
GetOutputByTensorName
MSTensor GetOutputByTensorName(const std::string &tensor_name);
Obtains the output tensor of the model by name.
Returns
The output tensor with the given name; if the name is not found, an invalid tensor is returned.
Resize
Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims);
Resizes the shapes of inputs.
Parameters
inputs
: a vector that includes all input tensors in order.
dims
: defines the new shapes of the inputs; must be consistent with inputs.
Returns
Status code.
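As an illustration, a model whose first input is an NHWC image tensor might be resized to batch size 2 (the shape values here are hypothetical, and the model must actually support variable shapes):

```cpp
#include <model.h>
#include <cstdint>
#include <vector>

mindspore::Status ResizeToBatch2(mindspore::Model *model) {
  auto inputs = model->GetInputs();
  // New shape for the first input only; dims must stay consistent
  // with the tensors passed to Resize.
  std::vector<std::vector<int64_t>> dims = {{2, 224, 224, 3}};
  return model->Resize({inputs[0]}, dims);
}
```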
CheckModelSupport
static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type);
Checks whether the type of device supports the type of model.
Parameters
device_type
: the device type; options are kMaliGPU, kAscend910, etc.
model_type
: the type of the model file; options are ModelType::kMindIR and ModelType::kOM.
Returns
A bool value indicating whether the device type supports the model type.
MSTensor
#include <types.h>
The MSTensor class defines a tensor in MindSpore.
Constructor and Destructor
MSTensor();
explicit MSTensor(const std::shared_ptr<Impl> &impl);
MSTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void *data, size_t data_len);
~MSTensor();
Static Public Member Function
CreateTensor
MSTensor *CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape,
const void *data, size_t data_len) noexcept;
Creates an MSTensor object whose data must be copied before being accessed by Model; must be used in pairs with DestroyTensorPtr.
Parameters
name
: the name of the MSTensor.
type
: the data type of the MSTensor.
shape
: the shape of the MSTensor.
data
: the data pointer that points to allocated memory.
data_len
: the length of the memory, in bytes.
Returns
A pointer to the MSTensor.
CreateRefTensor
MSTensor *CreateRefTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, void *data,
size_t data_len) noexcept;
Creates an MSTensor object whose data can be directly accessed by Model; must be used in pairs with DestroyTensorPtr.
Parameters
name
: the name of the MSTensor.
type
: the data type of the MSTensor.
shape
: the shape of the MSTensor.
data
: the data pointer that points to allocated memory.
data_len
: the length of the memory, in bytes.
Returns
A pointer to the MSTensor.
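The ownership difference between the two creation functions can be sketched as follows (kNumberTypeFloat32 is assumed to be the float32 member of DataType):

```cpp
#include <types.h>
#include <vector>

void TensorOwnershipSketch() {
  std::vector<float> buffer = {1.0f, 2.0f, 3.0f, 4.0f};
  // CreateTensor copies the buffer, so the tensor stays valid even
  // after `buffer` is destroyed; CreateRefTensor only references it.
  auto *copied = mindspore::MSTensor::CreateTensor(
      "in0", mindspore::DataType::kNumberTypeFloat32, {2, 2},
      buffer.data(), buffer.size() * sizeof(float));
  auto *ref = mindspore::MSTensor::CreateRefTensor(
      "in1", mindspore::DataType::kNumberTypeFloat32, {2, 2},
      buffer.data(), buffer.size() * sizeof(float));
  // Both creation functions must be paired with DestroyTensorPtr.
  mindspore::MSTensor::DestroyTensorPtr(copied);
  mindspore::MSTensor::DestroyTensorPtr(ref);
}
```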
StringsToTensor
MSTensor *StringsToTensor(const std::string &name, const std::vector<std::string> &str);
Creates a string-type MSTensor object whose data can be accessed by Model only after being copied; must be used in pairs with DestroyTensorPtr.
Parameters
name
: the name of the MSTensor.
str
: a vector container containing several strings.
Returns
A pointer to the MSTensor.
TensorToStrings
std::vector<std::string> TensorToStrings(const MSTensor &tensor);
Parses the string-type MSTensor object into strings.
Parameters
tensor
: an MSTensor object.
Returns
A vector container containing several strings.
DestroyTensorPtr
void DestroyTensorPtr(MSTensor *tensor) noexcept;
Destroys an object created by Clone, StringsToTensor, CreateRefTensor, or CreateTensor. Do not use it to destroy an MSTensor from other sources.
Parameters
tensor
: a pointer returned by Clone, StringsToTensor, CreateRefTensor, or CreateTensor.
Public Member Functions
Name
std::string Name() const;
Obtains the name of the MSTensor.
Returns
The name of the MSTensor.
DataType
enum DataType DataType() const;
Obtains the data type of the MSTensor.
Returns
The data type of the MSTensor.
Shape
const std::vector<int64_t> &Shape() const;
Obtains the shape of the MSTensor.
Returns
A vector that contains the shape of the MSTensor.
ElementNum
int64_t ElementNum() const;
Obtains the number of elements of the MSTensor.
Returns
The number of elements of the MSTensor.
Data
std::shared_ptr<const void> Data() const;
Obtains a shared pointer to a copy of the data of the MSTensor.
Returns
A shared pointer to a copy of the data of the MSTensor.
MutableData
void *MutableData();
Obtains the pointer to the data of the MSTensor.
Returns
The pointer to the data of the MSTensor.
DataSize
size_t DataSize() const;
Obtains the length of the data of the MSTensor, in bytes.
Returns
The length of the data of the MSTensor, in bytes.
IsDevice
bool IsDevice() const;
Gets a boolean value indicating whether the memory of the MSTensor is on a device.
Returns
A boolean value indicating whether the memory of the MSTensor is on a device.
Clone
MSTensor *Clone() const;
Gets a deep copy of the MSTensor; must be used in pairs with DestroyTensorPtr.
Returns
A pointer to a deep copy of the MSTensor.
operator==(std::nullptr_t)
bool operator==(std::nullptr_t) const;
Gets a boolean value indicating whether the MSTensor is valid.
Returns
A boolean value indicating whether the MSTensor is valid.
KernelCallBack
#include <ms_tensor.h>
using KernelCallBack = std::function<bool(std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)>;
A function wrapper. KernelCallBack defines the pointer type for a callback function.
CallBackParam
#include <ms_tensor.h>
A struct. CallBackParam defines input arguments for callback function.
Public Attributes
node_name
node_name
A string variable. Node name argument.
node_type
node_type
A string variable. Node type argument.
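A callback matching this signature might log each node as it executes; returning false from the callback aborts the inference. A sketch, assuming CallBackParam lives in the mindspore namespace:

```cpp
#include <ms_tensor.h>
#include <iostream>
#include <vector>

// Logs every executed node; return true to continue inference.
bool PrintNode(std::vector<mindspore::tensor::MSTensor *> inputs,
               std::vector<mindspore::tensor::MSTensor *> outputs,
               const mindspore::CallBackParam &opInfo) {
  std::cout << opInfo.node_type << ": " << opInfo.node_name << std::endl;
  return true;
}

mindspore::KernelCallBack callback = PrintNode;
```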