Class Model
Defined in File model.h
Class Documentation
-
class Model
The Model class is used to define a MindSpore model, facilitating computational graph management.
Public Functions
Build a model from a model buffer so that it can run on a device.
- Parameters
model_data – [in] Define the buffer read from a model file.
data_size – [in] Define the number of bytes in the model buffer.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kMindIR_Lite. Only ModelType::kMindIR_Lite is valid for device-side inference. Cloud-side inference supports both ModelType::kMindIR and ModelType::kMindIR_Lite, but the ModelType::kMindIR_Lite option will be removed in future iterations.
model_context – [in] Define the context used to store options during execution.
- Returns
Status. kSuccess: the build succeeded; kLiteModelRebuild: the model was built repeatedly; other values indicate other types of errors.
Load and build a model from a model file so that it can run on a device.
- Parameters
model_path – [in] Define the model path.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kMindIR_Lite. Only ModelType::kMindIR_Lite is valid for device-side inference. Cloud-side inference supports both ModelType::kMindIR and ModelType::kMindIR_Lite, but the ModelType::kMindIR_Lite option will be removed in future iterations.
model_context – [in] Define the context used to store options during execution.
- Returns
Status. kSuccess: the build succeeded; kLiteModelRebuild: the model was built repeatedly; other values indicate other types of errors.
Build a model from a model buffer so that it can run on a device.
- Parameters
model_data – [in] Define the buffer read from a model file.
data_size – [in] Define the number of bytes in the model buffer.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kMindIR_Lite. Only ModelType::kMindIR_Lite is valid for device-side inference. Cloud-side inference supports both ModelType::kMindIR and ModelType::kMindIR_Lite, but the ModelType::kMindIR_Lite option will be removed in future iterations.
model_context – [in] Define the context used to store options during execution.
dec_key – [in] Define the key used to decrypt the ciphertext model. The key length is 16.
dec_mode – [in] Define the decryption mode. Options: AES-GCM.
cropto_lib_path – [in] Define the OpenSSL library path.
- Returns
Status. kSuccess: the build succeeded; kLiteModelRebuild: the model was built repeatedly; other values indicate other types of errors.
Load and build a model from a model file so that it can run on a device.
- Parameters
model_path – [in] Define the model path.
model_type – [in] Define the type of model file. Options: ModelType::kMindIR, ModelType::kMindIR_Lite. Only ModelType::kMindIR_Lite is valid for device-side inference. Cloud-side inference supports both ModelType::kMindIR and ModelType::kMindIR_Lite, but the ModelType::kMindIR_Lite option will be removed in future iterations.
model_context – [in] Define the context used to store options during execution.
dec_key – [in] Define the key used to decrypt the ciphertext model. The key length is 16.
dec_mode – [in] Define the decryption mode. Options: AES-GCM.
cropto_lib_path – [in] Define the OpenSSL library path.
- Returns
Status. kSuccess: the build succeeded; kLiteModelRebuild: the model was built repeatedly; other values indicate other types of errors.
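The Build overloads above can be combined as in the following minimal sketch. The file name, context configuration, and error handling are illustrative assumptions, not part of this reference; header install paths may differ by distribution.

```cpp
#include <memory>
// MindSpore Lite public headers (install paths may differ).
#include "include/api/model.h"
#include "include/api/context.h"
#include "include/api/status.h"

int main() {
  // Create an execution context holding a CPU device (minimal assumed setup).
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  mindspore::Model model;
  // "model.ms" is a placeholder path; kMindIR_Lite is the device-side format.
  auto status = model.Build("model.ms", mindspore::ModelType::kMindIR_Lite, context);
  if (status != mindspore::kSuccess) {
    // kLiteModelRebuild would indicate Build was called on this model twice.
    return -1;
  }
  return 0;
}
```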
Build a model.
- Parameters
graph – [in] GraphCell is a derivative of Cell. Cell is not available currently. GraphCell can be constructed from Graph, for example, model.Build(GraphCell(graph), context).
model_context – [in] A context used to store options during execution.
train_cfg – [in] A config used by training.
- Returns
Status.
Build a Transfer Learning model where the backbone weights are fixed and the head weights are trainable.
- Parameters
backbone – [in] The static, non-learnable part of the graph.
head – [in] The trainable part of the graph.
context – [in] A context used to store options during execution.
train_cfg – [in] A config used by training.
- Returns
Status.
-
Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims)
Resize the shapes of inputs.
- Parameters
inputs – [in] A vector that includes all input tensors in order.
dims – [in] Defines the new shapes of inputs, should be consistent with inputs.
- Returns
Status.
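A minimal sketch of resizing inputs with Resize, assuming a single-input model that has already been built; the NHWC shape below is purely illustrative.

```cpp
// Assumes `model` is a mindspore::Model that has been built successfully.
std::vector<mindspore::MSTensor> inputs = model.GetInputs();
// Provide one new shape per input tensor, in the same order as `inputs`.
std::vector<std::vector<int64_t>> new_shapes = {{1, 224, 224, 3}};
auto status = model.Resize(inputs, new_shapes);
```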
-
Status UpdateWeights(const std::vector<MSTensor> &new_weights)
Change the size and/or content of weight tensors.
- Parameters
new_weights – [in] A vector of tensors with the new shapes and data to use in the model. If a tensor's data pointer is null, the data of the original tensor is copied to the new one.
- Returns
Status.
-
Status UpdateWeights(const std::vector<std::vector<MSTensor>> &new_weights)
Change the size and/or content of weight tensors.
- Parameters
new_weights – [in] A vector of vectors in which the model constants are arranged in sequence.
- Returns
Status.
-
Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Inference API of the model. If this API is called in train mode, it is equivalent to the RunStep API.
- Parameters
inputs – [in] A vector where the model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
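A typical Predict call can be sketched as follows, assuming `model` has been built successfully; the input-filling step is only indicated by a comment, since the data source is application-specific.

```cpp
// Assumes `model` is a mindspore::Model that has been built successfully.
std::vector<mindspore::MSTensor> inputs = model.GetInputs();
for (auto &tensor : inputs) {
  // Obtain a writable pointer and copy your input bytes into it, e.g.
  // std::memcpy(tensor.MutableData(), source, tensor.DataSize());
  void *data = tensor.MutableData();
  (void)data;
}
std::vector<mindspore::MSTensor> outputs;
// The before/after callbacks default to nullptr and can be omitted.
auto status = model.Predict(inputs, &outputs);
```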
-
Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Inference API of the model. If this API is called in train mode, it is equivalent to the RunStep API.
- Parameters
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
-
Status RunStep(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Training API. Run model by step.
- Parameters
before – [in] CallBack before RunStep.
after – [in] CallBack after RunStep.
- Returns
Status.
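A step-wise training loop using RunStep can be sketched as below, assuming the model was built with a TrainCfg; the step count and the data-binding comment are illustrative assumptions.

```cpp
// Assumes `model` was built with a training configuration (TrainCfg).
model.SetTrainMode(true);
for (int step = 0; step < 100; ++step) {  // step count is illustrative
  // Bind the next batch of training data to model.GetInputs() here.
  auto status = model.RunStep();
  if (status != mindspore::kSuccess) {
    break;  // stop on the first failed step
  }
}
```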
-
Status PredictWithPreprocess(const std::vector<std::vector<MSTensor>> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Run inference on the model, applying the data preprocessing embedded in the model.
- Parameters
inputs – [in] A vector where the model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
-
Status Preprocess(const std::vector<std::vector<MSTensor>> &inputs, std::vector<MSTensor> *outputs)
Apply data preprocessing if it exists in the model.
- Parameters
inputs – [in] A vector where the model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
- Returns
Status.
-
bool HasPreprocess()
Check if data preprocess exists in model.
- Returns
true if data preprocess exists.
-
inline Status LoadConfig(const std::string &config_path)
Load config file.
- Parameters
config_path – [in] config file path.
- Returns
Status.
-
inline Status UpdateConfig(const std::string &section, const std::pair<std::string, std::string> &config)
Update config.
- Parameters
section – [in] Define the config section.
config – [in] Define the config key-value pair to be updated.
- Returns
Status.
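LoadConfig and UpdateConfig can be combined as in the sketch below; the file path, section name, and key-value pair are hypothetical placeholders, not documented options.

```cpp
// Assumes `model` is a mindspore::Model instance.
// "model_config.ini" is a placeholder path for an assumed config file.
auto status = model.LoadConfig("model_config.ini");
// Override a single key in a section after loading; both names are hypothetical.
status = model.UpdateConfig("common", std::make_pair("enable_parallel", "true"));
```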
-
std::vector<MSTensor> GetInputs()
Obtains all input tensors of the model.
- Returns
The vector that includes all input tensors.
-
inline MSTensor GetInputByTensorName(const std::string &tensor_name)
Obtains the input tensor of the model by name.
- Returns
The input tensor with the given name; if the name is not found, an invalid tensor is returned.
-
std::vector<MSTensor> GetGradients() const
Obtain all gradient tensors of the model.
- Returns
The vector that includes all gradient tensors.
-
Status ApplyGradients(const std::vector<MSTensor> &gradients)
Update gradient tensors of the model.
- Parameters
gradients – [in] A vector of new gradients.
- Returns
Status of the operation.
-
std::vector<MSTensor> GetFeatureMaps() const
Obtain all weight tensors of the model.
- Returns
The vector that includes all weight tensors.
-
std::vector<MSTensor> GetTrainableParams() const
Obtain all trainable parameters of the model optimizers.
- Returns
The vector that includes all trainable parameters.
-
Status UpdateFeatureMaps(const std::vector<MSTensor> &new_weights)
Update the weight tensors of the model.
- Parameters
new_weights – [in] A vector of new weights.
- Returns
Status of the operation.
-
std::vector<MSTensor> GetOptimizerParams() const
Obtain the optimizer parameter tensors of the model.
- Returns
The vector that includes all parameter tensors.
-
Status SetOptimizerParams(const std::vector<MSTensor> &params)
Update the optimizer parameters.
- Parameters
params – [in] A vector of new optimizer parameters.
- Returns
Status of the operation.
-
Status SetupVirtualBatch(int virtual_batch_multiplier, float lr = -1.0f, float momentum = -1.0f)
Setup training with virtual batches.
- Parameters
virtual_batch_multiplier – [in] - virtual batch multiplier, use any number < 1 to disable.
lr – [in] - learning rate to use for virtual batch, -1 for internal configuration.
momentum – [in] - batch norm momentum to use for virtual batch, -1 for internal configuration.
- Returns
Status of operation.
-
Status SetLearningRate(float learning_rate)
Set the Learning Rate of the training.
- Parameters
learning_rate – [in] The learning rate to set.
- Returns
Status of the operation.
-
float GetLearningRate()
Get the Learning Rate of the optimizer.
- Returns
Learning rate. 0.0 if no optimizer was found.
-
Status InitMetrics(std::vector<Metrics*> metrics)
Initialize object with metrics.
- Parameters
metrics – [in] A vector of metrics objects.
- Returns
0 on success or -1 in case of error.
-
std::vector<MSTensor> GetOutputs()
Obtains all output tensors of the model.
- Returns
The vector that includes all output tensors.
-
inline std::vector<std::string> GetOutputTensorNames()
Obtains names of all output tensors of the model.
- Returns
A vector that includes names of all output tensors.
-
inline MSTensor GetOutputByTensorName(const std::string &tensor_name)
Obtains the output tensor of the model by name.
- Returns
The output tensor with the given name; if the name is not found, an invalid tensor is returned.
-
inline std::vector<MSTensor> GetOutputsByNodeName(const std::string &node_name)
Get output MSTensors of model by node name.
Note
Deprecated; use GetOutputByTensorName instead.
- Parameters
node_name – [in] Define the node name.
- Returns
The vector of output MSTensors.
-
Status BindGLTexture2DMemory(const std::map<std::string, unsigned int> &inputGLTexture, std::map<std::string, unsigned int> *outputGLTexture)
Bind a GLTexture2D object to OpenCL memory.
-
Status SetTrainMode(bool train)
Set the model running mode.
- Parameters
train – [in] True means model runs in Train Mode, otherwise Eval Mode.
- Returns
Status of operation.
-
bool GetTrainMode() const
Get the model running mode.
- Returns
Whether the model is in Train Mode.
Performs the training loop in Train Mode.
- Parameters
epochs – [in] The number of epochs to run.
ds – [in] A smart pointer to a MindData Dataset object.
cbs – [in] A vector of TrainLoopCallBack objects.
- Returns
Status of the operation.
Performs the training loop over all data in Eval Mode.
- Parameters
ds – [in] A smart pointer to a MindData Dataset object.
cbs – [in] A vector of TrainLoopCallBack objects.
- Returns
Status of the operation.
-
inline std::string GetModelInfo(const std::string &key)
Get model info by key.
- Parameters
key – [in] The key of the model info key-value pair.
- Returns
The value of the model info associated with the given key.
Public Static Functions
-
static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type)
Check whether the given device type supports the model type.
- Parameters
device_type – [in] Device type; options are kGPU, kAscend, etc.
model_type – [in] The type of model file; options are ModelType::kMindIR, ModelType::kOM.
- Returns
Whether the model is supported or not.
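Because CheckModelSupport is static, it can be called before any Model instance exists, as in this sketch; the chosen device and model types are illustrative.

```cpp
#include "include/api/model.h"
#include "include/api/types.h"

bool CanRunMindIROnAscend() {
  // Static query: no Model instance or Build call is required.
  return mindspore::Model::CheckModelSupport(mindspore::kAscend,
                                             mindspore::ModelType::kMindIR);
}
```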