Class Model

Class Documentation

class Model

The Model class is used to define a MindSpore model, facilitating computational graph management.

Public Functions

Status Build(GraphCell graph, const std::shared_ptr<Context> &model_context = nullptr, const std::shared_ptr<TrainCfg> &train_cfg = nullptr)

Builds a model so that it can run on a device.

Parameters
  • graph[in] A GraphCell, which is a derivative of Cell. Cell is not supported currently. A GraphCell can be constructed from a Graph, for example, model.Build(GraphCell(graph), context).

  • model_context[in] A context used to store options during execution.

  • train_cfg[in] A configuration used for training.

Returns

Status.
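As a usage sketch (not from this page), assuming a MindIR file on disk, the include/api headers of the C++ API, and CPU execution; model_path and the function name are placeholders:

    // Minimal sketch: load a MindIR graph with Serialization::Load, then build
    // a Model for CPU execution. Adjust headers and device info to your setup.
    #include <memory>
    #include <string>
    #include "include/api/context.h"
    #include "include/api/model.h"
    #include "include/api/serialization.h"

    mindspore::Status BuildFromFile(const std::string &model_path, mindspore::Model *model) {
      mindspore::Graph graph;
      auto ret = mindspore::Serialization::Load(model_path, mindspore::ModelType::kMindIR, &graph);
      if (ret != mindspore::kSuccess) {
        return ret;  // Loading the serialized graph failed.
      }
      auto context = std::make_shared<mindspore::Context>();
      context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());
      // Compile the graph so that it can run on the chosen device.
      return model->Build(mindspore::GraphCell(graph), context);
    }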

Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims)

Resizes the shapes of inputs.

Parameters
  • inputs[in] A vector that includes all input tensors in order.

  • dims[in] Defines the new shapes of the inputs; the order must be consistent with inputs.

Returns

Status.
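For illustration, a minimal sketch that resizes the first input to a new batch size; the shape values and function name are placeholders:

    #include <cstdint>
    #include <vector>
    #include "include/api/model.h"

    mindspore::Status ResizeBatch(mindspore::Model &model) {
      auto inputs = model.GetInputs();
      // Placeholder shape: the first (and only) input becomes {2, 3, 224, 224}.
      std::vector<std::vector<int64_t>> new_shapes = {{2, 3, 224, 224}};
      return model.Resize(inputs, new_shapes);
    }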

Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)

Runs inference on the model.

Parameters
  • inputs[in] A vector where model inputs are arranged in sequence.

  • outputs[out] A pointer to a vector; the model outputs are filled into the container in sequence.

  • before[in] Callback invoked before prediction.

  • after[in] Callback invoked after prediction.

Returns

Status.
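A minimal inference sketch, assuming the model was built as above and the input buffers were already filled (for example via MSTensor::MutableData); the function name is a placeholder:

    #include <vector>
    #include "include/api/model.h"

    mindspore::Status RunInference(mindspore::Model &model,
                                   std::vector<mindspore::MSTensor> *outputs) {
      // The inputs' buffers are assumed to hold the data to infer on.
      auto inputs = model.GetInputs();
      return model.Predict(inputs, outputs);
    }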

Status PredictWithPreprocess(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)

Runs inference on the model, applying the data preprocessing embedded in the model (see the sketch after HasPreprocess below).

Parameters
  • inputs[in] A vector where model inputs are arranged in sequence.

  • outputs[out] A pointer to a vector; the model outputs are filled into the container in sequence.

  • before[in] Callback invoked before prediction.

  • after[in] Callback invoked after prediction.

Returns

Status.

Status Preprocess(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs)

Applies the data preprocessing if it exists in the model.

Parameters
  • inputs[in] A vector where model inputs are arranged in sequence.

  • outputs[out] A pointer to a vector; the model outputs are filled into the container in sequence.

Returns

Status.

bool HasPreprocess()

Checks whether data preprocessing exists in the model.

Returns

true if data preprocess exists.
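A sketch tying the preprocessing entry points together; raw_inputs, prepared_inputs, and the function name are placeholders:

    #include <vector>
    #include "include/api/model.h"

    mindspore::Status Run(mindspore::Model &model,
                          const std::vector<mindspore::MSTensor> &raw_inputs,
                          const std::vector<mindspore::MSTensor> &prepared_inputs,
                          std::vector<mindspore::MSTensor> *outputs) {
      if (model.HasPreprocess()) {
        // The model applies its embedded preprocessing to the raw data first.
        // Alternatively, run the embedded step on its own via
        // model.Preprocess(raw_inputs, &processed).
        return model.PredictWithPreprocess(raw_inputs, outputs);
      }
      // No embedded preprocessing: feed tensors prepared by the application.
      return model.Predict(prepared_inputs, outputs);
    }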

inline Status LoadConfig(const std::string &config_path)

Loads a configuration file.

Parameters

  • config_path[in] The config file path.

Returns

Status.
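A short fragment as a sketch, assuming a Model instance named model; "model.cfg" is a placeholder path, and in typical usage the config is loaded before Build:

    // Load an optional configuration file before Build ("model.cfg" is a placeholder).
    if (model.LoadConfig("model.cfg") != mindspore::kSuccess) {
      // The configuration file could not be loaded.
    }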

std::vector<MSTensor> GetInputs()

Obtains all input tensors of the model.

Returns

The vector that includes all input tensors.

inline MSTensor GetInputByTensorName(const std::string &tensor_name)

Obtains the input tensor of the model by name.

Returns

The input tensor with the given name; if the name is not found, an invalid tensor is returned.
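A sketch that fills one input by name; the tensor name, host buffer, and function name are placeholders, and the nullptr comparison for detecting an invalid tensor assumes the MSTensor API of recent versions:

    #include <cstring>
    #include "include/api/model.h"

    bool FillInput(mindspore::Model &model, const void *host_data, size_t size) {
      auto tensor = model.GetInputByTensorName("input_0");  // placeholder name
      if (tensor == nullptr || tensor.DataSize() != size) {
        return false;  // name not found (invalid tensor) or size mismatch
      }
      std::memcpy(tensor.MutableData(), host_data, size);
      return true;
    }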

std::vector<MSTensor> GetGradients() const

Obtains all gradient tensors of the model.

Returns

The vector that includes all gradient tensors.

Status ApplyGradients(const std::vector<MSTensor> &gradients)

Updates the gradient tensors of the model.

Parameters

  • gradients[in] A vector of new gradient tensors.

Returns

Status.
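A sketch of how the two gradient methods pair up in a custom training step; the function name is a placeholder:

    #include <vector>
    #include "include/api/model.h"

    mindspore::Status TransformAndApply(mindspore::Model &model) {
      std::vector<mindspore::MSTensor> grads = model.GetGradients();
      // ... optionally scale or clip the gradient buffers via MutableData() ...
      return model.ApplyGradients(grads);
    }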

std::vector<MSTensor> GetOptimizerParams() const

Obtains the optimizer parameter tensors of the model.

Returns

The vector that includes all optimizer parameter tensors.

Status SetOptimizerParams(const std::vector<MSTensor> &params)

Updates the optimizer parameters.

Parameters

  • params[in] A vector of new optimizer parameter tensors.

Returns

Status.
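A sketch mirroring the gradient example for optimizer parameters; the function name is a placeholder:

    #include <vector>
    #include "include/api/model.h"

    mindspore::Status UpdateOptimizer(mindspore::Model &model) {
      std::vector<mindspore::MSTensor> params = model.GetOptimizerParams();
      // ... adjust parameter buffers (e.g. a learning-rate tensor) via MutableData() ...
      return model.SetOptimizerParams(params);
    }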

std::vector<MSTensor> GetOutputs()

Obtains all output tensors of the model.

Returns

The vector that includes all output tensors.

inline std::vector<std::string> GetOutputTensorNames()

Obtains names of all output tensors of the model.

Returns

A vector that includes names of all output tensors.

inline MSTensor GetOutputByTensorName(const std::string &tensor_name)

Obtains the output tensor of the model by name.

Returns

The output tensor with the given name; if the name is not found, an invalid tensor is returned.
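A sketch that enumerates outputs after Predict; it assumes float output data, which is model-specific, and the function name is a placeholder:

    #include "include/api/model.h"

    void ReadOutputs(mindspore::Model &model) {
      for (const auto &name : model.GetOutputTensorNames()) {
        auto out = model.GetOutputByTensorName(name);
        // Data() gives read-only access; float is a model-specific assumption.
        const float *data = static_cast<const float *>(out.Data().get());
        (void)data;  // e.g. consume out.ElementNum() values here
      }
    }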

inline std::vector<MSTensor> GetOutputsByNodeName(const std::string &node_name)

Obtains the output tensors of the model by node name.

Note

Deprecated. Use GetOutputByTensorName instead.

Parameters

  • node_name[in] The node name.

Returns

The vector of output MSTensors.

inline Status Build(const void *model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)

Builds a model from a model buffer so that it can run on a device. Only valid for Lite.

Parameters
  • model_data[in] The buffer read from a model file.

  • data_size[in] The number of bytes of the model buffer.

  • model_type[in] The type of the model file. Options: ModelType::kMindIR, ModelType::kOM. Only ModelType::kMindIR is valid for Lite.

  • model_context[in] The context used to store options during execution.

  • dec_key[in] The key used to decrypt the ciphertext model. The key length is 16, 24, or 32 bytes.

  • dec_mode[in] The decryption mode. Options: AES-GCM, AES-CBC.

Returns

Status.

inline Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)

Loads and builds a model from a model file so that it can run on a device. Only valid for Lite.

Parameters
  • model_path[in] The path of the model file.

  • model_type[in] The type of the model file. Options: ModelType::kMindIR, ModelType::kOM. Only ModelType::kMindIR is valid for Lite.

  • model_context[in] The context used to store options during execution.

  • dec_key[in] The key used to decrypt the ciphertext model. The key length is 16, 24, or 32 bytes.

  • dec_mode[in] The decryption mode. Options: AES-GCM, AES-CBC.

Returns

Status.
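A sketch of the file-based overload for an unencrypted MindIR model on CPU, so the default dec_key and dec_mode are left as-is; the path and function name are placeholders:

    #include <memory>
    #include "include/api/context.h"
    #include "include/api/model.h"

    mindspore::Status BuildFromPath(mindspore::Model *model) {
      auto context = std::make_shared<mindspore::Context>();
      context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());
      // Unencrypted model: the default dec_key/dec_mode arguments are left as-is.
      return model->Build("model.mindir", mindspore::ModelType::kMindIR, context);
    }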

Public Static Functions

static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type)

Checks whether the given model type is supported on the given device type.

Parameters
  • device_type[in] Device type. Options: kGPU, kAscend910, etc.

  • model_type[in] The type of the model file. Options: ModelType::kMindIR, ModelType::kOM.

Returns

true if the model type is supported on the device; false otherwise.
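A fragment as a sketch, using the device and model types named on this page:

    // Probe support before attempting Build.
    if (mindspore::Model::CheckModelSupport(mindspore::kAscend910,
                                            mindspore::ModelType::kMindIR)) {
      // A MindIR model can be built for Ascend 910 in this package.
    }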