Creating MindSpore Lite Models
Overview
Creating your MindSpore Lite (Train on Device) model is a two-step procedure:

In the first step, the model is defined and the layers that should be trained are declared. This is done on the server, using MindSpore-based Python code. The model is then exported into a protobuf-based format called MINDIR (a sketch is given below).

In the second step, the .mindir model is converted into a .ms format that can be loaded onto an embedded device and trained using the MindSpore Lite framework. The converted .ms models can be used for both training and inference.
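The following is a minimal sketch of the first step, assuming MindSpore is installed on the server. The network, its input shapes, and the file name my_model are illustrative, not prescribed; the essential pattern is wrapping the network in nn.TrainOneStepCell so that the exported graph contains the training step, then calling export with file_format="MINDIR".

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, export

# Illustrative network; any nn.Cell can be exported the same way.
class SimpleNet(nn.Cell):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(32 * 32, 10)

    def construct(self, x):
        return self.fc(self.flatten(x))

net = SimpleNet()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# Wrap the network with its loss and optimizer so the exported graph
# contains the full training step that will later run on the device.
train_net = nn.TrainOneStepCell(nn.WithLossCell(net, loss_fn), optimizer)
train_net.set_train()

# Export with example inputs; this produces my_model.mindir.
x = Tensor(np.ones((1, 1, 32, 32), np.float32))
label = Tensor(np.zeros((1,), np.int32))
export(train_net, x, label, file_name="my_model", file_format="MINDIR")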
Linux Environment
Environment Preparation
The MindSpore Lite model conversion tool (supported only on Linux) provides multiple parameters. The procedure for preparing the environment is as follows:
Add the path of the dynamic library required by the conversion tool to the environment variable LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=${PACKAGE_ROOT_PATH}/tools/converter/lib:${LD_LIBRARY_PATH}
${PACKAGE_ROOT_PATH} is the path of the decompressed package obtained by compiling or downloading.
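For example, assuming the package was decompressed to /opt/mindspore-lite (an illustrative path), the command becomes:

export LD_LIBRARY_PATH=/opt/mindspore-lite/tools/converter/lib:${LD_LIBRARY_PATH}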
Parameters Description
The table below describes the parameters used by the MindSpore Lite training model conversion tool.
Parameter | Required | Parameter Description | Value Range | Default Value
---|---|---|---|---
--help | No | Prints all the help information. | - | -
--fmk=&lt;FMK&gt; | Yes | Original format of the input model. | MINDIR | -
--modelFile=&lt;MODELFILE&gt; | Yes | Path of the input model. | - | -
--outputFile=&lt;OUTPUTFILE&gt; | Yes | Path of the output model. The suffix .ms is appended automatically. | - | -
--trainModel=&lt;TRAINMODEL&gt; | Yes | Whether the model will be trained on the device. | true, false | false
--quantType=&lt;QUANTTYPE&gt; | No | Sets the quantization type of the model. | WeightQuant (the only quantization type supported for training models) | -
--bitNum=&lt;BITNUM&gt; | No | Sets the quantization bit number when quantType is WeightQuant; 1-bit to 16-bit quantization is supported. | [1, 16] | 8
--quantWeightSize=&lt;QUANTWEIGHTSIZE&gt; | No | Sets a size threshold for convolution filters when quantType is WeightQuant; weight quantization is triggered when the filter size is larger than this value. | [0, +∞) | 0
--quantWeightChannel=&lt;QUANTWEIGHTCHANNEL&gt; | No | Sets a channel-number threshold for convolution filters when quantType is WeightQuant; weight quantization is triggered when the channel number is larger than this value. | [0, +∞) | 16
The parameter name and parameter value are separated by an equal sign (=) and no space is allowed between them.
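For instance, combining the training and weight-quantization parameters from the table above yields a command such as the following (the file names are illustrative):

./converter_lite --fmk=MINDIR --trainModel=true --modelFile=my_model.mindir --outputFile=my_model_quant --quantType=WeightQuant --bitNum=8

This produces my_model_quant.ms with 8-bit weight quantization applied to the convolution filters that exceed the quantWeightSize and quantWeightChannel thresholds.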
Example
Suppose the file to be converted is my_model.mindir. Run the following command:
./converter_lite --fmk=MINDIR --trainModel=true --modelFile=my_model.mindir --outputFile=my_model
If the command executes successfully, the my_model.ms target file is obtained and the console prints the following:
CONVERTER RESULT SUCCESS:0
If the conversion command fails, an error code is output.