mindspore::lite
Allocator
#include <context.h>
Allocator defines a memory pool for dynamic memory malloc and memory free.
Context
#include <context.h>
Context is defined for holding environment variables during runtime.
Constructors & Destructors
Context
Context()
Constructor of MindSpore Lite Context using default value for parameters.
~Context
~Context()
Destructor of MindSpore Lite Context.
Public Attributes
vendor_name_
vendor_name_
A string value. Describes the vendor information. This attribute is used to distinguish from different vendors.
thread_num_
thread_num_
An int value. Defaults to 2. Configures the number of threads in the thread pool.
allocator
allocator
A pointer to an Allocator.
device_list_
device_list_
A DeviceContextVector that contains DeviceContext variables.
CPU, GPU and NPU are supported now. If a GPU device context is set and GPU is supported on the current device, the GPU device is used first; otherwise the CPU device is used first. If an NPU device context is set and NPU is supported on the current device, the NPU device is used first; otherwise the CPU device is used first.
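The selection order above can be sketched as follows. The field names (thread_num_, device_list_, device_type_, device_info_) are taken from this reference; the include path is an assumption, so adjust it to your distribution:

```cpp
#include "include/context.h"  // include path assumed; adjust to your distribution

// Sketch: build a Context that prefers the GPU backend and falls back to CPU.
mindspore::lite::Context CreateGpuPreferredContext() {
  mindspore::lite::Context context;
  context.thread_num_ = 2;  // default thread-pool size

  mindspore::lite::DeviceContext gpu_ctx;
  gpu_ctx.device_type_ = mindspore::lite::DT_GPU;
  gpu_ctx.device_info_.gpu_device_info_.enable_float16_ = false;
  // Appending a GPU context asks the runtime to use GPU first when the
  // current device supports it; otherwise the CPU device is used.
  context.device_list_.push_back(gpu_ctx);
  return context;
}
```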
Model
#include <model.h>
Model defines the model in MindSpore Lite for managing the computational graph.
Destructors
~Model
virtual ~Model()
Destructor of MindSpore Lite Model.
Public Member Functions
Free
void Free()
Free MetaGraph in MindSpore Lite Model to reduce memory usage during inference.
Destroy
void Destroy()
Free all temporary memory in MindSpore Lite Model.
Static Public Member Functions
Import
static Model *Import(const char *model_buf, size_t size)
Static method to create a Model pointer.
Parameters
model_buf
: Defines the buffer read from a model file.
size
: Defines the byte number of the model buffer.
Returns
Pointer that points to the MindSpore Lite Model.
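A typical use of Import is to read the model file into memory first. This is a sketch using standard file I/O; the include path is an assumption:

```cpp
#include <fstream>
#include <string>
#include <vector>
#include "include/model.h"  // include path assumed

// Read a .ms model file into memory and create a Model from the buffer.
mindspore::lite::Model *LoadModel(const std::string &path) {
  std::ifstream ifs(path, std::ios::binary | std::ios::ate);
  if (!ifs.is_open()) {
    return nullptr;
  }
  auto size = static_cast<size_t>(ifs.tellg());
  std::vector<char> buf(size);
  ifs.seekg(0);
  ifs.read(buf.data(), static_cast<std::streamsize>(size));
  // Import parses the buffer; the caller owns the returned pointer and
  // may call Free() after graph compilation to reduce memory usage.
  return mindspore::lite::Model::Import(buf.data(), size);
}
```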
CpuBindMode
#include <context.h>
An enum type. CpuBindMode is defined for holding the arguments of the CPU binding strategy.
Public Attributes
MID_CPU
MID_CPU = 2
Bind to the middle-frequency CPU cores first.
HIGHER_CPU
HIGHER_CPU = 1
Bind to the higher-frequency CPU cores first.
NO_BIND
NO_BIND = 0
No bind.
DeviceType
#include <context.h>
An enum type. DeviceType is defined for holding user’s preferred backend.
Public Attributes
DT_CPU
DT_CPU = 0
CPU device type.
DT_GPU
DT_GPU = 1
GPU device type.
DT_NPU
DT_NPU = 2
NPU device type.
Version
#include <version.h>
std::string Version()
Global method to get a version string.
Returns
The version string of MindSpore Lite.
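For example, printing the runtime version (include path assumed):

```cpp
#include <iostream>
#include "include/version.h"  // include path assumed

int main() {
  // Version() returns the MindSpore Lite version as a std::string.
  std::cout << "MindSpore Lite version: " << mindspore::lite::Version() << std::endl;
  return 0;
}
```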
StringsToMSTensor
int StringsToMSTensor(const std::vector<std::string> &inputs, tensor::MSTensor *tensor)
Global method to store strings into MSTensor.
Returns
STATUS, which is defined in errorcode.h.
MSTensorToStrings
std::vector<std::string> MSTensorToStrings(const tensor::MSTensor *tensor)
Global method to get strings from MSTensor.
Returns
The vector of strings.
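Together, these two helpers allow a string round trip through an MSTensor. The sketch below assumes a valid tensor::MSTensor pointer created elsewhere, and assumes that a status of 0 corresponds to RET_OK in errorcode.h; the header declaring these helpers is not named in this reference:

```cpp
#include <string>
#include <vector>

// Sketch: store strings into a tensor and read them back.
// `tensor` must be a valid tensor::MSTensor* created elsewhere.
std::vector<std::string> RoundTrip(mindspore::tensor::MSTensor *tensor) {
  const std::vector<std::string> inputs = {"hello", "world"};
  int status = mindspore::lite::StringsToMSTensor(inputs, tensor);
  if (status != 0) {  // 0 assumed to be RET_OK from errorcode.h
    return {};
  }
  return mindspore::lite::MSTensorToStrings(tensor);
}
```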
DeviceContextVector
#include <context.h>
A vector that contains DeviceContext variables.
DeviceContext
#include <context.h>
DeviceContext defines different device contexts.
Public Attributes
device_type_
device_type_
An enum type. Defaults to DT_CPU. DeviceType is defined for holding user’s preferred backend.
device_info_
device_info_
A struct value that contains CpuDeviceInfo, GpuDeviceInfo and NpuDeviceInfo.
DeviceInfo
#include <context.h>
A struct value. DeviceInfo is defined for the backend’s configuration information.
Public Attributes
cpu_device_info_
cpu_device_info_
CpuDeviceInfo is defined for CPU’s configuration information.
gpu_device_info_
gpu_device_info_
GpuDeviceInfo is defined for GPU’s configuration information.
npu_device_info_
npu_device_info_
NpuDeviceInfo is defined for NPU’s configuration information.
CpuDeviceInfo
#include <context.h>
CpuDeviceInfo is defined for CPU’s configuration information.
Public Attributes
enable_float16_
enable_float16_
A bool value. Defaults to false. This attribute enables performing CPU float16 inference.
Enabling float16 inference may cause low inference precision, because some variables may exceed the range of float16 during forwarding.
cpu_bind_mode_
cpu_bind_mode_
A CpuBindMode enum variable. Defaults to MID_CPU.
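Setting both CPU attributes on a device context might look like this sketch (field names from this reference; include path assumed):

```cpp
#include "include/context.h"  // include path assumed

// Sketch: a CPU device context with full-precision inference and
// middle-frequency core binding.
mindspore::lite::DeviceContext MakeCpuContext() {
  mindspore::lite::DeviceContext cpu_ctx;
  cpu_ctx.device_type_ = mindspore::lite::DT_CPU;
  cpu_ctx.device_info_.cpu_device_info_.enable_float16_ = false;  // avoid precision loss
  cpu_ctx.device_info_.cpu_device_info_.cpu_bind_mode_ = mindspore::lite::MID_CPU;
  return cpu_ctx;
}
```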
GpuDeviceInfo
#include <context.h>
GpuDeviceInfo is defined for GPU’s configuration information.
Public Attributes
enable_float16_
enable_float16_
A bool value. Defaults to false. This attribute enables performing GPU float16 inference.
Enabling float16 inference may cause low inference precision, because some variables may exceed the range of float16 during forwarding.
NpuDeviceInfo
#include <context.h>
NpuDeviceInfo is defined for NPU’s configuration information.
Public Attributes
frequency_
frequency_
An int value. Defaults to 3. This attribute sets the NPU frequency, which can be set to 1 (low power consumption), 2 (balanced), 3 (high performance), or 4 (extreme performance).
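An NPU device context configured for high performance could be sketched as follows (include path assumed):

```cpp
#include "include/context.h"  // include path assumed

// Sketch: an NPU device context at the default frequency level 3
// (high performance); levels range from 1 (low power) to 4 (extreme).
mindspore::lite::DeviceContext MakeNpuContext() {
  mindspore::lite::DeviceContext npu_ctx;
  npu_ctx.device_type_ = mindspore::lite::DT_NPU;
  npu_ctx.device_info_.npu_device_info_.frequency_ = 3;
  return npu_ctx;
}
```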
TrainModel
#include <model.h>
Inherited from Model, TrainModel defines a class that supports importing and exporting the MindSpore trainable model.
Constructors & Destructors
~TrainModel
virtual ~TrainModel();
Class destructor, free all memory.
Public Member Functions
Import
static TrainModel *Import(const char *model_buf, size_t size);
Static method to create a TrainModel object.
Parameters
model_buf
: A buffer that was read from a MS model file.
size
: Length of the buffer.
Returns
Pointer to MindSpore Lite TrainModel.
Free
void Free() override;
Free meta graph related data.
ExportBuf
char *ExportBuf(char *buf, size_t *len) const;
Export Model into a buffer.
Parameters
buf
: The buffer to be exported into. If it is equal to nullptr, buf will be allocated.
len
: Size of the pre-allocated buffer and the returned size of the exported buffer.
Returns
Pointer to buffer with exported model.
Public Attributes
buf_size_
size_t buf_size_;
The length of the buffer with exported model.
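Passing nullptr as buf lets ExportBuf allocate the output buffer. This sketch writes the exported model to a file; ownership and release of the returned buffer are not stated in this reference, so releasing it is left out, and the include path is assumed:

```cpp
#include <fstream>
#include "include/model.h"  // include path assumed

// Sketch: export a trained model to disk using the allocate-on-nullptr path.
bool ExportToFile(const mindspore::lite::TrainModel *model, const char *path) {
  size_t len = 0;
  // With buf == nullptr, ExportBuf allocates the buffer; len receives the
  // exported size (also available afterwards via buf_size_).
  char *buf = model->ExportBuf(nullptr, &len);
  if (buf == nullptr) {
    return false;
  }
  std::ofstream ofs(path, std::ios::binary);
  ofs.write(buf, static_cast<std::streamsize>(len));
  return ofs.good();
}
```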