Class Model
Defined in File model.h
Class Documentation
-
class Model
The Model class is used to define a MindSpore model, facilitating computational graph management.
Public Functions
-
Status Build(GraphCell graph, const std::shared_ptr<Context> &model_context = nullptr, const std::shared_ptr<TrainCfg> &train_cfg = nullptr)
Builds a model.
- Parameters
graph – [in] GraphCell is a derivative of Cell (Cell is not available currently). A GraphCell can be constructed from a Graph, for example, model.Build(GraphCell(graph), context).
model_context – [in] A context used to store options during execution.
train_cfg – [in] A config used by training.
- Returns
Status.
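A minimal build sketch, assuming the graph is loaded with Serialization::Load, a CPU target, and the usual MindSpore Lite header layout:

    #include <memory>
    #include "include/api/context.h"
    #include "include/api/model.h"
    #include "include/api/serialization.h"

    mindspore::Status BuildFromGraph() {
      // Load a MindIR graph from disk (the file name is illustrative).
      mindspore::Graph graph;
      auto ret = mindspore::Serialization::Load(
          "net.mindir", mindspore::ModelType::kMindIR, &graph);
      if (ret != mindspore::kSuccess) {
        return ret;
      }
      // Context with a single CPU device.
      auto context = std::make_shared<mindspore::Context>();
      context->MutableDeviceInfo().push_back(
          std::make_shared<mindspore::CPUDeviceInfo>());
      // Build from the GraphCell, as in the example above.
      mindspore::Model model;
      return model.Build(mindspore::GraphCell(graph), context);
    }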
-
Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr<Context> &context, const std::shared_ptr<TrainCfg> &train_cfg = nullptr)
Builds a Transfer Learning model where the backbone weights are fixed and the head weights are trainable.
- Parameters
backbone – [in] The static, non-learnable part of the graph.
head – [in] The trainable part of the graph.
context – [in] A context used to store options during execution.
train_cfg – [in] A config used by training.
- Returns
Status.
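A hedged sketch of the transfer-learning setup, assuming the backbone and head ship as separate MindIR files and that a context has already been built as in the previous sketch (file names are illustrative):

    mindspore::Graph backbone;
    mindspore::Graph head;
    (void)mindspore::Serialization::Load(
        "backbone.mindir", mindspore::ModelType::kMindIR, &backbone);
    (void)mindspore::Serialization::Load(
        "head.mindir", mindspore::ModelType::kMindIR, &head);

    // Backbone weights stay fixed; only the head is trained.
    auto train_cfg = std::make_shared<mindspore::TrainCfg>();
    mindspore::Model model;
    auto status = model.BuildTransferLearning(mindspore::GraphCell(backbone),
                                              mindspore::GraphCell(head),
                                              context, train_cfg);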
-
Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims)
Resizes the shapes of inputs.
- Parameters
inputs – [in] A vector that includes all input tensors in order.
dims – [in] Defines the new shapes of the inputs; must be consistent with inputs.
- Returns
Status.
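For a single-input model, a resize to batch size 2 could look like the following sketch (the shape values are illustrative):

    auto inputs = model.GetInputs();
    // One shape entry per input tensor, in the same order as GetInputs().
    std::vector<std::vector<int64_t>> new_shapes = {{2, 3, 224, 224}};
    if (model.Resize(inputs, new_shapes) != mindspore::kSuccess) {
      // handle resize failure
    }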
-
Status UpdateWeights(const std::vector<MSTensor> &new_weights)
Changes the size and/or content of weight tensors.
- Parameters
new_weights – [in] A vector of tensors with new shapes and data to use in the model. If a tensor's data pointer is null, the data of the original tensor is copied to the new one.
- Returns
Status.
-
Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Runs model inference.
- Parameters
inputs – [in] A vector where model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
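A typical inference round trip, as a sketch; app_data_for is a hypothetical helper that returns the application's input bytes for a tensor name:

    #include <cstring>

    auto inputs = model.GetInputs();
    for (auto &tensor : inputs) {
      // Copy application data into the tensor's backing buffer.
      std::memcpy(tensor.MutableData(), app_data_for(tensor.Name()),
                  tensor.DataSize());
    }
    std::vector<mindspore::MSTensor> outputs;
    if (model.Predict(inputs, &outputs) != mindspore::kSuccess) {
      // handle inference failure
    }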
-
Status RunStep(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Runs a single training step.
- Parameters
before – [in] CallBack before each step.
after – [in] CallBack after each step.
- Returns
Status.
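A step-wise training loop sketch; feeding each batch into the model's input tensors is elided, and steps_per_epoch is a hypothetical count:

    for (int step = 0; step < steps_per_epoch; ++step) {
      // ... copy the next batch into model.GetInputs() here ...
      if (model.RunStep() != mindspore::kSuccess) {
        // handle training failure
        break;
      }
    }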
-
Status PredictWithPreprocess(const std::vector<std::vector<MSTensor>> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Runs model inference with the data preprocessing embedded in the model.
- Parameters
inputs – [in] A vector where model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
-
Status Preprocess(const std::vector<std::vector<MSTensor>> &inputs, std::vector<MSTensor> *outputs)
Applies data preprocessing if it exists in the model.
- Parameters
inputs – [in] A vector where model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the preprocessing outputs are filled in sequence.
- Returns
Status.
-
bool HasPreprocess()
Checks whether data preprocessing exists in the model.
- Returns
true if data preprocess exists.
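HasPreprocess lets callers choose between the two inference entry points; a sketch, with raw_inputs standing in for un-preprocessed application data:

    std::vector<mindspore::MSTensor> outputs;
    if (model.HasPreprocess()) {
      // The model carries its own preprocessing graph; feed raw data.
      std::vector<std::vector<mindspore::MSTensor>> raw_inputs;  // filled by the application
      (void)model.PredictWithPreprocess(raw_inputs, &outputs);
    } else {
      (void)model.Predict(model.GetInputs(), &outputs);
    }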
-
inline Status LoadConfig(const std::string &config_path)
Loads a config file.
- Parameters
config_path – [in] config file path.
- Returns
Status.
-
inline Status UpdateConfig(const std::string &section, const std::pair<std::string, std::string> &config)
Updates the config.
- Parameters
section – [in] Define the config section.
config – [in] Define the config item to be updated.
- Returns
Status.
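A config-handling sketch; the file name, section, and key below are illustrative, not a documented schema:

    if (model.LoadConfig("model_config.ini") != mindspore::kSuccess) {
      // handle a missing or invalid config file
    }
    // Override a single key-value pair inside a named section.
    (void)model.UpdateConfig("execution", {"thread_num", "4"});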
-
std::vector<MSTensor> GetInputs()
Obtains all input tensors of the model.
- Returns
The vector that includes all input tensors.
-
inline MSTensor GetInputByTensorName(const std::string &tensor_name)
Obtains the input tensor of the model by name.
- Returns
The input tensor with the given name; if the name is not found, an invalid tensor is returned.
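A lookup sketch with the validity check the return description calls for; "input_ids" is an illustrative name, and the nullptr comparison assumes the operator MSTensor provides for invalid tensors:

    auto tensor = model.GetInputByTensorName("input_ids");
    if (tensor == nullptr) {
      // no input with that name: the returned tensor is invalid
    }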
-
std::vector<MSTensor> GetGradients() const
Obtains all gradient tensors of the model.
- Returns
The vector that includes all gradient tensors.
-
Status ApplyGradients(const std::vector<MSTensor> &gradients)
Updates the gradient tensors of the model.
- Parameters
gradients – [in] A vector of new gradients.
- Returns
Status of operation
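A gradient round trip, e.g. for clipping done outside the model; the in-place modification is elided in this sketch:

    auto grads = model.GetGradients();
    // ... scale or clip each tensor's data in place ...
    if (model.ApplyGradients(grads) != mindspore::kSuccess) {
      // handle gradient update failure
    }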
-
std::vector<MSTensor> GetFeatureMaps() const
Obtains all weight (feature map) tensors of the model.
- Returns
The vector that includes all weight tensors.
-
Status UpdateFeatureMaps(const std::vector<MSTensor> &new_weights)
Updates the weight tensors of the model.
- Parameters
new_weights – [in] A vector of new weights.
- Returns
Status of operation
-
std::vector<MSTensor> GetOptimizerParams() const
Obtains the optimizer parameter tensors of the model.
- Returns
The vector that includes all optimizer parameter tensors.
-
Status SetOptimizerParams(const std::vector<MSTensor> &params)
Updates the optimizer parameters.
- Parameters
params – [in] A vector of new optimizer params.
- Returns
Status of operation
-
Status SetupVirtualBatch(int virtual_batch_multiplier, float lr = -1.0f, float momentum = -1.0f)
Sets up training with virtual batches.
- Parameters
virtual_batch_multiplier – [in] Virtual batch multiplier; use any number < 1 to disable.
lr – [in] Learning rate to use for the virtual batch; -1 for internal configuration.
momentum – [in] Batch norm momentum to use for the virtual batch; -1 for internal configuration.
- Returns
Status of operation
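For example, to accumulate four micro-batches per effective batch while keeping the internally configured learning rate and momentum (a sketch):

    // lr and momentum keep their -1.0f defaults (internal configuration).
    if (model.SetupVirtualBatch(4) != mindspore::kSuccess) {
      // handle failure
    }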
-
Status SetLearningRate(float learning_rate)
Sets the learning rate of the training.
- Parameters
learning_rate – [in] The learning rate to set.
- Returns
Status of operation
-
float GetLearningRate()
Gets the learning rate of the optimizer.
- Returns
The learning rate; 0.0 if no optimizer was found.
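A simple step-decay schedule built from the two calls; epoch and the decay policy are illustrative:

    float lr = model.GetLearningRate();
    // 0.0 means no optimizer was found, so skip the update.
    if (lr > 0.0f && epoch % 10 == 0) {
      (void)model.SetLearningRate(lr * 0.5f);
    }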
-
std::vector<MSTensor> GetOutputs()
Obtains all output tensors of the model.
- Returns
The vector that includes all output tensors.
-
inline std::vector<std::string> GetOutputTensorNames()
Obtains names of all output tensors of the model.
- Returns
A vector that includes names of all output tensors.
-
inline MSTensor GetOutputByTensorName(const std::string &tensor_name)
Obtains the output tensor of the model by name.
- Returns
The output tensor with the given name; if the name is not found, an invalid tensor is returned.
-
inline std::vector<MSTensor> GetOutputsByNodeName(const std::string &node_name)
Gets the output MSTensors of the model by node name.
Note
Deprecated; use GetOutputByTensorName instead.
- Parameters
node_name – [in] Define node name.
- Returns
The vector of output MSTensor.
-
Status BindGLTexture2DMemory(const std::map<std::string, unsigned int> &inputGLTexture, std::map<std::string, unsigned int> *outputGLTexture)
Binds GLTexture2D objects to OpenCL memory.
- Parameters
inputGLTexture – [in] The input GLTexture id for the Model.
outputGLTexture – [out] The output GLTexture id for the Model.
- Returns
Status of operation.
-
inline Status Build(const void *model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)
Builds a model from a model buffer so that it can run on a device. Only valid for Lite.
- Parameters
model_data – [in] Define the buffer read from a model file.
data_size – [in] Define the byte count of the model buffer.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kOM. Only ModelType::kMindIR is valid for Lite.
model_context – [in] Define the context used to store options during execution.
dec_key – [in] Define the key used to decrypt the ciphertext model. The key length is 16, 24, or 32.
dec_mode – [in] Define the decryption mode. Options: AES-GCM, AES-CBC.
- Returns
Status.
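A buffer-based build sketch, reading the whole file into memory first; the file name is illustrative and context is assumed to be built as in the earlier sketch:

    #include <fstream>
    #include <vector>

    std::ifstream ifs("net.ms", std::ios::binary | std::ios::ate);
    const auto size = static_cast<size_t>(ifs.tellg());
    std::vector<char> buffer(size);
    ifs.seekg(0);
    ifs.read(buffer.data(), size);

    mindspore::Model model;
    auto status = model.Build(buffer.data(), buffer.size(),
                              mindspore::ModelType::kMindIR, context);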
-
inline Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)
Loads and builds a model from a model file so that it can run on a device. Only valid for Lite.
- Parameters
model_path – [in] Define the model path.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kOM. Only ModelType::kMindIR is valid for Lite.
model_context – [in] Define the context used to store options during execution.
dec_key – [in] Define the key used to decrypt the ciphertext model. The key length is 16, 24, or 32.
dec_mode – [in] Define the decryption mode. Options: AES-GCM, AES-CBC.
- Returns
Status.
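The path overload collapses loading and building into one call; a sketch assuming a CPU context and an illustrative file name:

    auto context = std::make_shared<mindspore::Context>();
    context->MutableDeviceInfo().push_back(
        std::make_shared<mindspore::CPUDeviceInfo>());

    mindspore::Model model;
    auto status = model.Build("net.mindir", mindspore::ModelType::kMindIR, context);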
Public Static Functions
-
static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type)
Checks whether the given model type is supported on the given device type.
- Parameters
device_type – [in] Device type; options are kGPU, kAscend, etc.
model_type – [in] The type of model file, options are ModelType::kMindIR, ModelType::kOM.
- Returns
true if supported; otherwise false.
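A sketch of probing support before committing to a device:

    if (mindspore::Model::CheckModelSupport(mindspore::DeviceType::kGPU,
                                            mindspore::ModelType::kMindIR)) {
      // safe to add GPU device info to the context and build a MindIR model
    }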