.. MindSpore documentation master file, created by
   sphinx-quickstart on Thu Mar 24 10:00:00 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

MindSpore Lite API
=======================

Summary of MindSpore Lite API Support
--------------------------------------
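Before the full mapping, the following is a minimal C++ sketch of how the Context, CPUDeviceInfo, and Model interfaces listed in the table below are typically combined. It is illustrative only: the model path ``mobilenetv2.ms``, the thread count, and the affinity mode are placeholder values, and the include paths assume the layout of the MindSpore Lite release package.

.. code-block:: cpp

    #include <memory>
    #include <string>

    #include "include/api/context.h"
    #include "include/api/model.h"
    #include "include/api/status.h"
    #include "include/api/types.h"

    int main() {
      // Configure the runtime: thread count and CPU core-binding mode
      // (Context::SetThreadNum / Context::SetThreadAffinity below).
      auto context = std::make_shared<mindspore::Context>();
      context->SetThreadNum(2);
      context->SetThreadAffinity(1);  // core-binding mode; value is illustrative

      // Describe the target device and attach it via MutableDeviceInfo().
      auto cpu_info = std::make_shared<mindspore::CPUDeviceInfo>();
      cpu_info->SetEnableFP16(false);
      context->MutableDeviceInfo().push_back(cpu_info);

      // Load and compile the model from a file path
      // (Status Build(const std::string &model_path, ModelType, ...) below).
      mindspore::Model model;
      mindspore::Status status =
          model.Build("mobilenetv2.ms", mindspore::kMindIR, context);
      if (status != mindspore::kSuccess) {
        return 1;  // Build failed; inspect the returned Status for details.
      }
      return 0;
    }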
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置运行时的算子并行推理数目 | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前算子并行数设置 | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置运行时的CPU绑核策略 | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 
Context | 获取当前CPU绑核策略 | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置运行时的CPU绑核列表 | void SetThreadAffinity(const std::vector<int> &core_list) | `Context.cpu.thread_affinity_core_list <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前CPU绑核列表 | std::vector<int32_t> GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置运行时是否支持并行 | void SetEnableParallel(bool is_parallel) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前是否支持并行 | bool GetEnableParallel() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置内置Delegate模式,以使用第三方AI框架辅助推理 | void SetBuiltInDelegate(DelegateMode mode) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前内置Delegate模式 | DelegateMode GetBuiltInDelegate() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置Delegate,Delegate定义了用于支持第三方AI框架接入的代理 | set_delegate(const std::shared_ptr<AbstractDelegate> &delegate) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前Delegate | std::shared_ptr<AbstractDelegate> get_delegate() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 在多设备中,配置量化模型是否以浮点模式运行 | void SetMultiModalHW(bool float_mode) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前配置中,量化模型的运行模式 | bool GetMultiModalHW() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 修改该context下的DeviceInfoContext数组 | std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo() | 封装在 `Context.target <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 将DeviceInfoContext转换为T类型的指针 | std::shared_ptr<T> Cast() | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 设置设备生产商名 | void SetProvider(const std::string &provider) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取设备的生产商名 | std::string GetProvider() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 设置生产商设备名 | void SetProviderDevice(const std::string &device) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取生产商设备名 | std::string GetProviderDevice() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 设置内存管理器 | void SetAllocator(const std::shared_ptr<Allocator> &allocator) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取内存管理器 | std::shared_ptr<Allocator> GetAllocator() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | CPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `context.cpu <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | CPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | CPUDeviceInfo | 获取当前是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.cpu.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 
获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.gpu <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.gpu.device_id <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取当前运行的RANK ID | int GetRankID() const | `Context.gpu.rank_id <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取当前运行的GROUP SIZE | int GetGroupSize() const | `Context.gpu.group_size <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置推理时算子精度 | void SetPrecisionMode(const std::string &precision_mode) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取推理时算子精度 | std::string GetPrecisionMode() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.gpu.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置是否绑定OpenGL纹理数据 | void SetEnableGLTexture(bool is_enable_gl_texture) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取是否绑定OpenGL纹理数据 | bool GetEnableGLTexture() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置指定OpenGL EGLContext | void SetGLContext(void \*gl_context) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取当前OpenGL EGLContext | void \*GetGLContext() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 设置指定OpenGL EGLDisplay | void SetGLDisplay(void \*gl_display) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | 获取当前OpenGL EGLDisplay | void \*GetGLDisplay() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.ascend <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.ascend.device_id <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置AIPP配置文件路径 | void SetInsertOpConfigPath(const std::string &cfg_path) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取AIPP配置文件路径 | std::string GetInsertOpConfigPath() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型输入format | void SetInputFormat(const std::string &format) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取模型输入format | std::string GetInputFormat() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型输入shape | void SetInputShape(const std::string &shape) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取模型输入shape | std::string GetInputShape() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型输入shape | void SetInputShapeMap(const std::map<int, std::vector <int>> &shape) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取模型输入shape | std::map<int, std::vector <int>> GetInputShapeMap() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型动态batch的挡位,支持个数范围[2, 100] | void SetDynamicBatchSize(const std::vector<size_t> &dynamic_batch_size) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取已配置模型的动态batch | std::string GetDynamicBatchSize() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型动态分辨率档位 | void SetDynamicImageSize(const std::string &dynamic_image_size) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取已配置模型的动态分辨率 | std::string GetDynamicImageSize() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型输出type | void SetOutputType(enum DataType output_type) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取模型输出type | enum DataType GetOutputType() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置模型精度模式 | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取模型精度模式 | std::string GetPrecisionMode() const | `Context.ascend.precision_mode <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context.target>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置算子实现方式 | void SetOpSelectImplMode(const std::string &op_select_impl_mode) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取已配置的算子选择模式 | std::string GetOpSelectImplMode() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置融合开关配置文件,可指定关闭特定融合规则 | void SetFusionSwitchConfigPath(const std::string &cfg_path) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取已配置的融合开关文件路径 | std::string GetFusionSwitchConfigPath() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 设置缓存优化模式 | void SetBufferOptimizeMode(const std::string &buffer_optimize_mode) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | 获取缓存优化模式 | std::string GetBufferOptimizeMode() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | KirinNPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | KirinNPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | KirinNPUDeviceInfo | 获取是否以FP16精度进行推理 | bool GetEnableFP16() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | KirinNPUDeviceInfo | 设置NPU频率 | void SetFrequency(int frequency) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | KirinNPUDeviceInfo | 获取NPU频率 | int GetFrequency() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 从内存缓冲区加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr <Context> &model_context = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 根据路径读取加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr <Context> &model_context = nullptr) | `Model.build_from_file <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.build_from_file>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 从内存缓冲区加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const void \*model_data, size_t data_size, ModelType model_type, 
const std::shared_ptr <Context> &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &cropto_lib_path) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 根据路径读取加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr <Context> &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &cropto_lib_path) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 将GraphCell存储的模型编译至可在Device上运行的状态 | Status Build(GraphCell graph, const std::shared_ptr <Context> &model_context = nullptr, const std::shared_ptr <TrainCfg> &train_cfg = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 将GraphCell存储的模型编译至可在Device上运行的状态 | Status Build(GraphCell graph, Node \*optimizer, std::vector<Expr \*> inputs, const std::shared_ptr <Context> &model_context, const std::shared_ptr <TrainCfg> &train_cfg) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 构建一个迁移学习模型,其中主干权重是固定的,头部权重是可训练的 | Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr <Context> &context, const std::shared_ptr <TrainCfg> &train_cfg = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 调整已编译模型的输入张量形状 | Status Resize(const std::vector <MSTensor> &inputs, const std::vector <std::vector<int64_t>> &dims) | `Model.resize <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.resize>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 更新模型的权重Tensor的大小和内容 | Status UpdateWeights(const std::vector <MSTensor> &new_weights) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 推理模型 | Status Predict(const std::vector <MSTensor> &inputs, std::vector <MSTensor> \*outputs, const 
MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.predict>`__ (不支持callback) | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 仅带callback的推理模型 | Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 单步训练模型 | Status RunStep(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 推理模型,并在推理前进行数据预处理 | Status PredictWithPreprocess(const std::vector <std::vector<MSTensor>> &inputs, std::vector <MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 若模型配置了数据预处理,对模型输入数据进行数据预处理 | Status Preprocess(const std::vector <std::vector<MSTensor>> &inputs, std::vector <MSTensor> \*outputs) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 检查模型是否配置了数据预处理 | bool HasPreprocess() | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 根据路径读取配置文件 | Status LoadConfig(const std::string &config_path) | 封装在 `Model.build_from_file <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.build_from_file>`__ 方法的 `config_path` 参数中 | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 刷新配置 | Status UpdateConfig(const std::string &section, const std::pair<std::string, std::string> &config) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型所有输入张量 | std::vector <MSTensor> GetInputs() | `Model.get_inputs <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.get_inputs>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型指定名字的输入张量 | MSTensor GetInputByTensorName(const std::string &tensor_name) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取所有Tensor的梯度 | std::vector <MSTensor> GetGradients() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 应用所有Tensor的梯度 | Status ApplyGradients(const std::vector <MSTensor> &gradients) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型的所有权重Tensors | std::vector <MSTensor> GetFeatureMaps() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取optimizer中所有参与权重更新的MSTensor | std::vector <MSTensor> GetTrainableParams() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 更新模型的权重Tensor内容 | Status UpdateFeatureMaps(const std::vector <MSTensor> &new_weights) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取optimizer参数MSTensor | std::vector <MSTensor> GetOptimizerParams() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 更新optimizer参数 | Status SetOptimizerParams(const std::vector <MSTensor> &params) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 设置虚拟batch用于训练 | Status SetupVirtualBatch(int virtual_batch_multiplier, float lr = -1.0f, float momentum = -1.0f) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 设置训练学习率 | Status SetLearningRate(float learning_rate) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取优化器学习率 | float GetLearningRate() | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 训练指标参数初始化 | Status InitMetrics(std::vector<Metrics \*> metrics) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取训练指标参数 | std::vector<Metrics \*> GetMetrics() | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型所有输出张量 | std::vector <MSTensor> GetOutputs() | 封装在 `Model.predict <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.predict>`__ 的返回值 | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型所有输出张量的名字 | std::vector <std::string> GetOutputTensorNames() | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型指定名字的输出张量 | MSTensor GetOutputByTensorName(const std::string &tensor_name) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 通过节点名获取模型的MSTensors输出张量 | std::vector <MSTensor> GetOutputsByNodeName(const std::string &node_name) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 将OpenGL纹理数据与模型的输入和输出进行绑定 | Status BindGLTexture2DMemory(const std::map<std::string, unsigned int> &inputGLTexture, std::map<std::string, unsigned int> \*outputGLTexture) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | session设置训练模式 | Status SetTrainMode(bool train) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取session是否是训练模式 | bool GetTrainMode() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 模型训练 | Status Train(int epochs, std::shared_ptr <dataset::Dataset> ds, std::vector<TrainCallBack \*> cbs) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 模型验证 | Status Evaluate(std::shared_ptr <dataset::Dataset> ds, std::vector<TrainCallBack \*> cbs) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 检查设备是否支持该模型 | static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 设置RunnerConfig的worker的个数 | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 获取RunnerConfig的worker的个数 | int32_t GetWorkersNum() const | `Context.parallel.workers_num <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 设置RunnerConfig的context参数 | void SetContext(const std::shared_ptr <Context> &context) | 封装在 `Context.parallel <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 获取RunnerConfig配置的上下文参数 | std::shared_ptr <Context> GetContext() const | 封装在 `Context.parallel <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 设置RunnerConfig的配置参数 | void SetConfigInfo(const std::string &section, const std::map<std::string, std::string> &config) | `Context.parallel.config_info <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 获取RunnerConfig配置参数信息 | std::map<std::string, std::map<std::string, std::string>> GetConfigInfo() const | `Context.parallel.config_info <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 设置RunnerConfig中的配置文件路径 | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | RunnerConfig | 获取RunnerConfig中的配置文件的路径 | std::string GetConfigPath() const | `Context.parallel.config_path <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Context.html#mindspore_lite.Context>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 根据路径读取加载模型,生成一个或者多个模型,并将所有模型编译至可在Device上运行的状态 | Status Init(const std::string &model_path, const std::shared_ptr <RunnerConfig> &runner_config = nullptr) | `Model.parallel_runner.build_from_file <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelParallelRunner.html#mindspore_lite.ModelParallelRunner.build_from_file>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 根据模型文件数据,生成一个或者多个模型,并将所有模型编译至可在Device上运行的状态 | Status Init(const void \*model_data, const size_t data_size, const std::shared_ptr <RunnerConfig> &runner_config = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 获取模型所有输入张量 | std::vector <MSTensor> GetInputs() | `Model.parallel_runner.get_inputs <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelParallelRunner.html#mindspore_lite.ModelParallelRunner.get_inputs>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 获取模型所有输出张量 | std::vector <MSTensor> GetOutputs() | 封装在 `Model.parallel_runner.predict <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelParallelRunner.html#mindspore_lite.ModelParallelRunner.predict>`__ 的返回值 | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 并发推理模型 | Status Predict(const std::vector <MSTensor> &inputs, std::vector <MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelParallelRunner.html#mindspore_lite.ModelParallelRunner.predict>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个MSTensor对象,其数据需复制后才能由Model访问 | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void \*data, size_t data_len) noexcept | `Tensor <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个MSTensor对象,其数据可以直接由Model访问 | static inline MSTensor \*CreateRefTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void \*data, size_t data_len, bool own_data = true) noexcept | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个MSTensor对象,其device数据可以直接由Model访问 | static inline MSTensor CreateDeviceTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, void \*data, size_t data_len) noexcept | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个MSTensor对象,其数据由文件路径file所指定 | static inline MSTensor \*CreateTensorFromFile(const std::string &file, DataType type = DataType::kNumberTypeUInt8, const std::vector<int64_t> &shape = {}) noexcept | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个字符串类型的MSTensor对象,其数据需复制后才能由Model访问 | static inline MSTensor \*StringsToTensor(const std::string &name, const std::vector<std::string> &str) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 将字符串类型的MSTensor对象解析为字符串 | static inline std::vector<std::string> TensorToStrings(const MSTensor 
&tensor) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 销毁一个由 `Clone` 、 `StringsToTensor` 、 `CreateRefTensor` 或 `CreateTensor` 所创建的对象 | static void DestroyTensorPtr(MSTensor \*tensor) noexcept | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor的名字 | std::string Name() const | `Tensor.name <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.name>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor的数据类型 | enum DataType DataType() const | `Tensor.dtype <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.dtype>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor的Shape | const std::vector<int64_t> &Shape() const | `Tensor.shape 
<https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.shape>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor的元素个数 | int64_t ElementNum() const | `Tensor.element_num <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.element_num>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取指向MSTensor中的数据拷贝的智能指针 | std::shared_ptr <const void> Data() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor中的数据的指针 | void \*MutableData() | 封装在 `Tensor.get_data_to_numpy <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.get_data_to_numpy>`__ 和 `Tensor.set_data_from_numpy <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.set_data_from_numpy>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor中的数据的以字节为单位的内存长度 | size_t DataSize() const | `Tensor.data_size <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.data_size>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor中的数据是否是常量数据 | bool IsConst() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor中是否在设备上 | bool IsDevice() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取指向深拷贝副本的指针 | MSTensor \*Clone() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否合法 | bool operator==(std::nullptr_t) const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否非法 | bool operator!=(std::nullptr_t) const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否与另一个MSTensor相等 | bool operator==(const MSTensor &tensor) const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否与另一个MSTensor不相等 | bool operator!=(const MSTensor &tensor) const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的Shape | void SetShape(const std::vector<int64_t> &shape) | `Tensor.shape <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.shape>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的DataType | void SetDataType(enum DataType data_type) | `Tensor.dtype <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.dtype>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的名字 | void SetTensorName(const std::string &name) | `Tensor.name <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.name>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor数据所属的内存池 | void 
SetAllocator(std::shared_ptr <Allocator> allocator) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor数据所属的内存池 | std::shared_ptr <Allocator> allocator() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor数据的format | void SetFormat(mindspore::Format format) | `Tensor.format <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.format>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor数据的format | mindspore::Format format() const | `Tensor.format <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.Tensor.html#mindspore_lite.Tensor.format>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置指向MSTensor数据的指针 | void SetData(void \*data, bool own_data = true) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor数据的设备地址 | void SetDeviceData(void \*data) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取由SetDeviceData接口设置的MSTensor数据的设备地址 | void \*GetDeviceData() | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor的量化参数 | std::vector <QuantParam> QuantParams() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的量化参数 | void SetQuantParams(std::vector <QuantParam> quant_params) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 构造ModelGroup对象,指示共享工作空间内存或共享权重内存,默认共享工作空间内存 | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelGroup.html#mindspore_lite.ModelGroup>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享权重内存时,添加需要共享权重内存的模型对象 | Status AddModel(const std::vector<Model> &model_list) | `ModelGroup.add_model <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelGroup.html#mindspore_lite.ModelGroup.add_model>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型路径 | Status AddModel(const std::vector<std::string> &model_path_list) | `ModelGroup.add_model <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelGroup.html#mindspore_lite.ModelGroup.add_model>`__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型缓存 | Status AddModel(const std::vector<std::pair<const void \*, size_t>> &model_buff_list) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享工作空间内存时,计算最大的工作空间内存大小 | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr<Context> &ms_context) | `ModelGroup.cal_max_size_of_workspace <https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite/mindspore_lite.ModelGroup.html#mindspore_lite.ModelGroup.cal_max_size_of_workspace>`__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

.. toctree::
   :maxdepth: 1
   :caption: C++ API

   api_cpp/mindspore
   api_cpp/mindspore_api
   api_cpp/mindspore_api_utils
   api_cpp/mindspore_converter
   api_cpp/mindspore_dataset
   api_cpp/mindspore_dataset_config
   api_cpp/mindspore_dataset_text
   api_cpp/mindspore_dataset_transforms
   api_cpp/mindspore_dataset_vision
   api_cpp/mindspore_kernel
   api_cpp/mindspore_ops
   api_cpp/mindspore_registry
   api_cpp/mindspore_registry_opencl
   api_cpp/lite_cpp_example

.. toctree::
   :maxdepth: 1
   :caption: JAVA API

   api_java/class_list
   api_java/model
   api_java/model_parallel_runner
   api_java/mscontext
   api_java/mstensor
   api_java/runner_config
   api_java/graph
   api_java/lite_java_example

.. toctree::
   :maxdepth: 1
   :caption: Python API

   mindspore_lite

.. toctree::
   :maxdepth: 1
   :caption: C API

   api_c/context_c
   api_c/data_type_c
   api_c/format_c
   api_c/model_c
   api_c/tensor_c
   api_c/types_c
   api_c/lite_c_example
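
C++ 接口调用示意
--------------------------------------

下面给出一段示意性的 C++ 片段(并非官方示例),演示上表中 MSTensor 的 ``CreateTensor``、``Name``、``ElementNum``、``DataSize`` 与 ``DestroyTensorPtr`` 等接口的典型配合方式。其中头文件路径、张量名称与形状均为假设,请以实际发布包和完整的 C++ API 文档为准。

.. code-block:: cpp

    // 示意性片段:MSTensor 的创建、属性读取与销毁(头文件路径为假设)。
    #include <iostream>
    #include <vector>
    #include "include/api/types.h"

    int main() {
      // 准备一段浮点数据,CreateTensor 会将其拷贝到新建的 MSTensor 中。
      std::vector<float> data(1 * 3 * 224 * 224, 0.5f);
      mindspore::MSTensor *tensor = mindspore::MSTensor::CreateTensor(
          "input_0", mindspore::DataType::kNumberTypeFloat32, {1, 3, 224, 224},
          data.data(), data.size() * sizeof(float));
      if (tensor == nullptr) {
        std::cerr << "CreateTensor failed." << std::endl;
        return -1;
      }
      std::cout << "name=" << tensor->Name()
                << ", element_num=" << tensor->ElementNum()
                << ", data_size=" << tensor->DataSize() << std::endl;
      // 由 CreateTensor 创建的对象需通过 DestroyTensorPtr 释放。
      mindspore::MSTensor::DestroyTensorPtr(tensor);
      return 0;
    }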
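
并发推理方面,``ModelParallelRunner::Predict`` 的参数形式可参考下面的示意片段。片段假设 runner 已完成初始化,头文件路径亦为假设;``before``、``after`` 两个回调参数带有默认值,可以省略。

.. code-block:: cpp

    // 示意性片段:调用 ModelParallelRunner::Predict 进行并发推理(假设 runner 已初始化)。
    #include <vector>
    #include "include/api/types.h"
    #include "include/api/model_parallel_runner.h"

    mindspore::Status RunPredict(mindspore::ModelParallelRunner *runner,
                                 const std::vector<mindspore::MSTensor> &inputs) {
      std::vector<mindspore::MSTensor> outputs;
      // before/after 回调参数默认为 nullptr,此处省略。
      return runner->Predict(inputs, &outputs);
    }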
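
ModelGroup 共享工作空间内存的典型调用顺序如下面的示意片段所示:先以默认的 ``kShareWorkspace`` 方式构造,再通过 ``AddModel`` 加入模型路径,最后调用 ``CalMaxSizeOfWorkspace`` 计算共享工作空间所需的最大内存。其中模型文件名、头文件路径与 ``Context`` 的来源均为假设。

.. code-block:: cpp

    // 示意性片段:多个模型共享工作空间内存(模型路径与头文件路径为假设)。
    #include <memory>
    #include <string>
    #include <vector>
    #include "include/api/context.h"
    #include "include/api/model_group.h"

    mindspore::Status ShareWorkspace(const std::shared_ptr<mindspore::Context> &context) {
      // 默认构造即为 ModelGroupFlag::kShareWorkspace,表示共享工作空间内存。
      mindspore::ModelGroup model_group;
      std::vector<std::string> model_paths = {"model_a.mindir", "model_b.mindir"};
      auto ret = model_group.AddModel(model_paths);
      if (ret != mindspore::kSuccess) {
        return ret;
      }
      // 在各模型分别编译之前,计算共享工作空间所需的最大内存。
      return model_group.CalMaxSizeOfWorkspace(mindspore::kMindIR, context);
    }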