Class ModelParallelRunner
Defined in File model_parallel_runner.h
Class Documentation
-
class ModelParallelRunner
The ModelParallelRunner class defines a MindSpore parallel model runner, which facilitates Model management.
Public Functions
-
Status Init(const std::string &model_path, const std::shared_ptr<RunnerConfig> &runner_config = nullptr)
Build a model parallel runner from a model path so that it can run on a device. Supports importing the ms model (exported by the converter_lite tool) and the mindir model (exported by MindSpore or by the converter_lite tool). Support for the ms model will be removed in future iterations; it is recommended to use the mindir model for inference. When using the ms model for inference, keep the model file suffix as .ms, otherwise it will not be recognized.
- Parameters
model_path – [in] Define the model path.
runner_config – [in] Define the config used to store options during model pool init.
- Returns
Status.
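A minimal sketch of initializing the runner from a model file path is shown below. The header paths, the placeholder model file name, and the RunnerConfig options used (SetContext, SetWorkersNum) are assumptions about the surrounding MindSpore Lite API and may differ between versions.
#include <iostream>
#include <memory>
#include "include/api/context.h"                 // assumed header location
#include "include/api/model_parallel_runner.h"   // assumed header location

int main() {
  // Describe the device to run on and how many parallel workers to create
  // (SetContext / SetWorkersNum are assumed RunnerConfig setters).
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());
  auto runner_config = std::make_shared<mindspore::RunnerConfig>();
  runner_config->SetContext(context);
  runner_config->SetWorkersNum(2);

  // Build the parallel runner from a mindir model path ("model.mindir" is a placeholder).
  mindspore::ModelParallelRunner runner;
  if (runner.Init("model.mindir", runner_config) != mindspore::kSuccess) {
    std::cerr << "ModelParallelRunner Init failed" << std::endl;
    return -1;
  }
  return 0;
}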
-
Status Init(const void *model_data, const size_t data_size, const std::shared_ptr<RunnerConfig> &runner_config = nullptr)
Build a model parallel runner from a model buffer so that it can run on a device. This interface only supports passing in mindir model file data.
- Parameters
model_data – [in] Define the buffer read from a model file.
data_size – [in] Define the number of bytes of the model buffer.
runner_config – [in] Define the config used to store options during model pool init.
- Returns
Status.
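A sketch of the buffer variant, assuming the mindir file is first read into memory; the file-reading helper and the model path are placeholders.
#include <fstream>
#include <memory>
#include <sstream>
#include <string>
#include "include/api/model_parallel_runner.h"   // assumed header location

// Read the whole model file into a byte buffer ("model.mindir" is a placeholder path).
std::string ReadFile(const std::string &path) {
  std::ifstream ifs(path, std::ios::binary);
  std::ostringstream oss;
  oss << ifs.rdbuf();
  return oss.str();
}

bool InitFromBuffer(mindspore::ModelParallelRunner *runner,
                    const std::shared_ptr<mindspore::RunnerConfig> &runner_config) {
  std::string buffer = ReadFile("model.mindir");
  // Pass the raw bytes and their size; only mindir model data is supported by this overload.
  return runner->Init(buffer.data(), buffer.size(), runner_config) == mindspore::kSuccess;
}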
-
std::vector<MSTensor> GetInputs()
Obtains the information of all input tensors of the model.
- 返回
The vector that includes all input tensors.
-
std::vector<MSTensor> GetOutputs()
Obtains the information of all output tensors of the model.
- 返回
The vector that includes all output tensors.
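The metadata returned by GetInputs and GetOutputs can be inspected before running inference, for example to verify tensor names and sizes; a brief sketch using common MSTensor accessors (Name, Shape, DataSize):
#include <iostream>

// Print the name, rank, and byte size of every model input and output tensor.
void DumpTensorInfo(mindspore::ModelParallelRunner *runner) {
  for (auto &tensor : runner->GetInputs()) {
    std::cout << "input:  " << tensor.Name() << " dims=" << tensor.Shape().size()
              << " bytes=" << tensor.DataSize() << std::endl;
  }
  for (auto &tensor : runner->GetOutputs()) {
    std::cout << "output: " << tensor.Name() << " dims=" << tensor.Shape().size()
              << " bytes=" << tensor.DataSize() << std::endl;
  }
}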
-
Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Runs inference with the ModelParallelRunner.
- Parameters
inputs – [in] A vector where model inputs are arranged in sequence.
outputs – [out] A pointer to a vector into which the model outputs are filled in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
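A sketch of one complete inference call, constructing fresh input tensors from the metadata reported by GetInputs; the zero-filled host buffers are placeholders for real application data, and the MSTensor constructor taking (name, type, shape, data, size) is assumed from the public MSTensor API.
#include <iostream>
#include <vector>

// Build input tensors matching the model's reported metadata, then run Predict once.
bool RunOnce(mindspore::ModelParallelRunner *runner) {
  auto model_inputs = runner->GetInputs();
  std::vector<std::vector<char>> host_data(model_inputs.size());
  std::vector<mindspore::MSTensor> inputs;
  for (size_t i = 0; i < model_inputs.size(); ++i) {
    host_data[i].resize(model_inputs[i].DataSize(), 0);  // placeholder: copy real input bytes here
    inputs.emplace_back(model_inputs[i].Name(), model_inputs[i].DataType(), model_inputs[i].Shape(),
                        host_data[i].data(), host_data[i].size());
  }
  std::vector<mindspore::MSTensor> outputs;
  if (runner->Predict(inputs, &outputs) != mindspore::kSuccess) {
    std::cerr << "Predict failed" << std::endl;
    return false;
  }
  std::cout << "received " << outputs.size() << " output tensor(s)" << std::endl;
  return true;
}
Because the runner manages a pool of model workers, calls like RunOnce can typically be issued from multiple threads concurrently, with each Predict dispatched to an available worker.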