mindspore_lite.Converter

class mindspore_lite.Converter[source]

Constructs a Converter class.

Used in the following scenarios:

  1. Convert a third-party model into a MindSpore model or MindSpore Lite model.

  2. Convert a MindSpore model into a MindSpore model or MindSpore Lite model.

Converting to a MindSpore model is recommended. Converting to a MindSpore Lite model is currently supported, but it will be deprecated in the future. If you want to convert to a MindSpore Lite model, please use converter_tool instead of the Python interface. The Model API and ModelParallelRunner API only support MindSpore models.

Note

Please construct the Converter class first, and then generate the model by executing the Converter.convert() method.

The encryption and decryption function is only valid when MSLITE_ENABLE_MODEL_ENCRYPTION=on is set at compile time, and is only supported on Linux x86 platforms. decrypt_key and encrypt_key are strings expressed in hexadecimal. For example, if encrypt_key is set to "30313233343536373839414243444546", the corresponding key bytes are b'0123456789ABCDEF'. Linux platform users can use the xxd tool to convert a key expressed in bytes into its hexadecimal expression. Note that the encryption and decryption algorithm was updated in version 1.7, so the new Python interface does not support converting models exported with encryption by MindSpore Lite version 1.6 and earlier.
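
As a small illustration of the key formats (standard-library Python only; the key value below is illustrative), the correspondence between the byte form and the hexadecimal form of a key can be checked as follows:

>>> key_bytes = b"0123456789ABCDEF"  # a 16-byte key
>>> # Hexadecimal string form expected by encrypt_key / decrypt_key.
>>> key_bytes.hex()
'30313233343536373839414243444546'
>>> # And back from the hexadecimal string to the raw key bytes.
>>> bytes.fromhex("30313233343536373839414243444546")
b'0123456789ABCDEF'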

Examples

>>> # testcase based on cloud inference package.
>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # The model is generated only after converter.convert() is executed on the constructed Converter.
>>> converter.weight_fp16 = True
>>> converter.input_shape = {"inTensor1": [1, 3, 32, 32]}
>>> converter.input_format = mslite.Format.NHWC
>>> converter.input_data_type = mslite.DataType.FLOAT32
>>> converter.output_data_type = mslite.DataType.FLOAT32
>>> converter.save_type = mslite.ModelType.MINDIR
>>> converter.decrypt_key = "30313233343637383939414243444546"
>>> converter.decrypt_mode = "AES-GCM"
>>> converter.enable_encryption = True
>>> converter.encrypt_key = "30313233343637383939414243444546"
>>> converter.infer = True
>>> converter.optimize = "general"
>>> converter.device = "Ascend"
>>> section = "common_quant_param"
>>> config_info_in = {"quant_type": "WEIGHT_QUANT"}
>>> converter.set_config_info(section, config_info_in)
>>> print(converter.get_config_info())
{'common_quant_param': {'quant_type': 'WEIGHT_QUANT'}}
>>> print(converter)
config_info: {'common_quant_param': {'quant_type': 'WEIGHT_QUANT'}},
weight_fp16: True,
input_shape: {'inTensor1': [1, 3, 32, 32]},
input_format: Format.NHWC,
input_data_type: DataType.FLOAT32,
output_data_type: DataType.FLOAT32,
save_type: ModelType.MINDIR,
decrypt_key: 30313233343637383939414243444546,
decrypt_mode: AES-GCM,
enable_encryption: True,
encrypt_key: 30313233343637383939414243444546,
infer: True,
optimize: general,
device: Ascend.
property decrypt_key

Get the key used to decrypt the encrypted MindIR file.

Returns

str, the key used to decrypt the encrypted MindIR file, expressed in hexadecimal characters. Only valid when fmk_type is FmkType.MINDIR.

property decrypt_mode

Get decryption mode for the encrypted MindIR file.

Returns

str, the decryption mode for the encrypted MindIR file. Only valid when decrypt_key is set. Options are "AES-GCM", "AES-CBC".

property device

Get the target device for model conversion.

Returns

str, the target device for model conversion. Only valid for Ascend. The use case is: when running on an Ascend device, if you need the converted model to be able to use the Ascend backend for inference, set this parameter. If it is not set, the converted model uses the CPU backend for inference by default. The only option is "Ascend".
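
For example, a minimal sketch of converting for the Ascend backend (this assumes an Ascend environment; the file paths are placeholders and the call assumes the Converter.convert(fmk_type, model_file, output_file) signature):

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # Target Ascend so the converted model can use the Ascend backend for inference.
>>> converter.device = "Ascend"
>>> converter.convert(fmk_type=mslite.FmkType.MINDIR, model_file="net.mindir",
...                   output_file="net_ascend")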

property device_id

Get the device id of the target device.

Returns

int, the device id of the target device.

property enable_encryption

Get whether to encrypt the model when exporting.

Returns

bool, whether to encrypt the model when exporting. Export encryption can protect the integrity of the model, but it will increase the initialization time at runtime.

property encrypt_key

Get the key used to encrypt the model when exporting.

Returns

str, the key used to encrypt the model when exporting, expressed in hexadecimal characters. It is only supported when decrypt_mode is "AES-GCM" and the key length is 16 bytes.

get_config_info()[source]

Get the config info of the converter. It is used together with the set_config_info method for online conversion. Please call set_config_info before get_config_info.

Returns

dict{str: dict{str: str}}, the config info which has been set in converter.

Examples

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> section = "common_quant_param"
>>> config_info_in = {"quant_type": "WEIGHT_QUANT"}
>>> converter.set_config_info(section, config_info_in)
>>> config_info_out = converter.get_config_info()
>>> print(config_info_out)
{'common_quant_param': {'quant_type': 'WEIGHT_QUANT'}}
property infer

Get whether to perform pre-inference when the conversion completes.

Returns

bool, whether to perform pre-inference when the conversion completes.

property input_data_type

Get the data type of the quantization model input Tensor.

Returns

DataType, the data type of the quantization model input Tensor. It is only valid when the quantization parameters (scale and zero point) of the model input Tensor are available. The following 4 DataTypes are supported: DataType.FLOAT32, DataType.INT8, DataType.UINT8, DataType.UNKNOWN. For details, see DataType. A usage sketch follows the list below.

  • DataType.FLOAT32: 32-bit floating-point number.

  • DataType.INT8: 8-bit integer.

  • DataType.UINT8: unsigned 8-bit integer.

  • DataType.UNKNOWN: keep the same DataType as the model input Tensor.
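
For example, a brief sketch of requesting 8-bit inputs for a quantized model (this takes effect only when the input Tensor carries quantization parameters; output_data_type works analogously for outputs):

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # Feed the quantized model uint8 inputs instead of float32.
>>> converter.input_data_type = mslite.DataType.UINT8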

property input_format

Get the input format of model.

Returns

Format, the input format of the model. Only valid for 4-dimensional input. The following 2 input formats are supported: Format.NCHW, Format.NHWC. For details, see Format.

  • Format.NCHW: Store Tensor data in the order of batch N, channel C, height H and width W.

  • Format.NHWC: Store Tensor data in the order of batch N, height H, width W and channel C.

property input_shape

Get the dimension of the model input.

Returns

dict{str, list[int]}, the dimension of the model input. The order of input dimensions is consistent with the original model. For example, {"inTensor1": [1, 32, 32, 32], "inTensor2": [1, 1, 32, 32]}. Users may need to set this parameter in the following scenarios. Default: None, equivalent to {}.

  • Usage 1: The input of the model to be converted is dynamic shape, but you plan to infer with a fixed shape; then set the parameter to the fixed shape. After setting, the default input shape of the converted model is the same as the parameter setting, and no resize is needed when inferring (see the sketch after this list).

  • Usage 2: No matter whether the original input of the model to be converted is dynamic shape or not, if you plan to infer with a fixed shape and want the performance of the model optimized as much as possible, set the parameter to the fixed shape. After setting, the model structure will be further optimized, but the converted model may lose the characteristics of dynamic shape (some operators strongly related to shape will be merged).

  • Usage 3: When using the converter to generate code for Micro inference execution, it is recommended to set the parameter to reduce the probability of errors during deployment. When the model contains a Shape operator or the input of the model to be converted has a dynamic shape, you must set the parameter to a fixed shape to support the relevant shape optimization and code generation.
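
For example, a minimal sketch of Usage 1, fixing a dynamic input to a static shape before conversion (the Tensor name and dimensions are placeholders):

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # Pin the dynamic input "inTensor1" to a static shape so the converted
>>> # model can be inferred directly, without resizing.
>>> converter.input_shape = {"inTensor1": [1, 3, 224, 224]}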

property optimize

Get whether fusion optimization is avoided.

optimize is used to set the mode of optimization during the offline conversion:

  • "none": no relevant graph optimization is performed during the offline conversion phase; the relevant graph optimizations are performed during the inference phase instead. The advantage is that the converted model can be deployed directly to any CPU/GPU/Ascend hardware backend since it is not optimized in a specific way, while the disadvantage is that the initialization time of the model increases during inference execution.

  • "general": general optimization is performed, such as constant folding and operator fusion (the converted model only supports the CPU/GPU hardware backend, not the Ascend backend).

  • "gpu_oriented": general optimization plus extra optimization for GPU hardware is performed (the converted model only supports the GPU hardware backend).

  • "ascend_oriented": optimization for Ascend hardware is performed (the converted model only supports the Ascend hardware backend).

For a MindSpore model, since it is already a MindIR model, two approaches are suggested:

  1. Perform inference directly without offline conversion.

  2. When using offline conversion, set optimize to "general" for the CPU/GPU hardware backend (general optimization), to "gpu_oriented" for GPU hardware (extra GPU optimization on top of general optimization), and to "ascend_oriented" for Ascend hardware. The relevant optimization is done in the offline phase to reduce the initialization time of inference execution.

Returns

str, the mode of optimization. Options are "none", "general", "gpu_oriented", "ascend_oriented". "none" means fusion optimization is not performed; "general", "gpu_oriented" and "ascend_oriented" mean fusion optimization is performed.
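
For example, a short sketch of choosing the mode per target backend (the choice below is illustrative):

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # General offline optimization for CPU/GPU deployment; use "ascend_oriented"
>>> # instead when the converted model will run on the Ascend backend.
>>> converter.optimize = "general"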

property output_data_type

Get the data type of the quantization model output Tensor.

Returns

DataType, the data type of the quantization model output Tensor. It is only valid when the quantization parameters (scale and zero point) of the model output Tensor are available. The following 4 DataTypes are supported: DataType.FLOAT32, DataType.INT8, DataType.UINT8, DataType.UNKNOWN. For details, see DataType.

  • DataType.FLOAT32: 32-bit floating-point number.

  • DataType.INT8: 8-bit integer.

  • DataType.UINT8: unsigned 8-bit integer.

  • DataType.UNKNOWN: keep the same DataType as the model output Tensor.

property rank_id

Get the rank id of the device target.

Returns

int, the rank id of the device target.

property save_type

Get the model type to be exported.

Returns

ModelType, the model type to be exported. Options are ModelType.MINDIR, ModelType.MINDIR_LITE. Converting to a MindSpore model is recommended. Converting to a MindSpore Lite model is currently supported, but it will be deprecated in the future. For details, see ModelType.

set_config_info(section='', config_info=None)[source]

Set config info for the Converter. It is used together with the get_config_info method for online conversion.

Parameters
  • section (str, optional) –

    The category of the configuration parameter. Set the individual parameters of the config file together with config_info. For example, for section = "common_quant_param", config_info = {"quant_type": "WEIGHT_QUANT"}. Default: "".

    For the configuration parameters related to post training quantization, please refer to quantization .

    For the configuration parameters related to extension, please refer to extension .

    • "common_quant_param": Common quantization parameter.

    • "mixed_bit_weight_quant_param": Mixed bit weight quantization parameter.

    • "full_quant_param": Full quantization parameter.

    • "data_preprocess_param": Data preprocess quantization parameter.

    • "registry": Extension configuration parameter.

  • config_info (dict{str: str}, optional) –

    List of configuration parameters. Set the individual parameters of the config file together with section. For example, for section = "common_quant_param", config_info = {"quant_type": "WEIGHT_QUANT"}. Default: None, which is equivalent to {}.

    For the configuration parameters related to post training quantization, please refer to quantization .

    For the configuration parameters related to extension, please refer to extension .

Raises
  • TypeError – section is not a str.

  • TypeError – config_info is not a dict.

  • TypeError – config_info is a dict, but the keys are not str.

  • TypeError – config_info is a dict and the keys are str, but the values are not str.

Examples

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> section = "common_quant_param"
>>> config_info = {"quant_type": "WEIGHT_QUANT"}
>>> converter.set_config_info(section, config_info)
property weight_fp16

Get whether the model will be saved in the Float16 data type.

Returns

bool, whether the model will be saved in the Float16 data type. If True, the Float32 const Tensors in the model will be saved in the Float16 data type during conversion, and the generated model size will be compressed. Then, the precision_mode parameter of Context.CPU determines the data type of the inputs used to perform inference. The priority of weight_fp16 is very low. For example, if quantization is enabled, weight_fp16 will not take effect again for weights that have already been quantized. weight_fp16 only takes effect for const Tensors of the Float32 data type.
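
For example, a minimal sketch of enabling Float16 weight compression:

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter()
>>> # Save Float32 const Tensors as Float16 to shrink the generated model;
>>> # already-quantized weights are unaffected because quantization takes priority.
>>> converter.weight_fp16 = True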