Applying Post Training Quantization
Translator: unseeme
Concept
Post training quantization refers to performing weights quantization or full quantization on a pre-trained model. It can reduce the model size while also speeding up inference. The process does not require training; only a small amount of calibration data is needed for activation quantization.
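For reference, post training quantization maps float values to low-bit integers through an affine transform defined by a scale and a zero point. The sketch below is a generic NumPy illustration of that mapping for unsigned 8-bit values; it is not the API of any MindSpore tool, and the function names are chosen only for this example.

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine mapping to unsigned 8-bit: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Approximate recovery of the original float values."""
    return scale * (q.astype(np.float32) - zero_point)

# Scale and zero point are derived from the observed value range [x_min, x_max].
x = np.array([-1.2, -0.3, 0.0, 0.7, 2.5], dtype=np.float32)
x_min, x_max = x.min(), x.max()
scale = (x_max - x_min) / 255.0
zero_point = np.round(-x_min / scale)
print(quantize(x, scale, zero_point))                                # int8 codes
print(dequantize(quantize(x, scale, zero_point), scale, zero_point))  # close to x
```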
Weights Quantization
Quantizes only the weights of the model, which reduces the model size; float32 operations are still performed during inference. The lower the number of quantization bits, the greater the model compression rate, but the accuracy loss usually becomes larger as well.
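The following NumPy sketch illustrates this trade-off under an assumed per-tensor symmetric scheme (it is not MindSpore code): only the weights are converted to low-bit integers, they are dequantized back to float32 before the matrix multiply, and fewer bits mean a smaller stored model but a larger output error.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # a pre-trained weight matrix
x = rng.standard_normal((1, 256)).astype(np.float32)    # an input activation

def quantize_weights(w, bits):
    """Per-tensor symmetric quantization of weights to `bits` signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

for bits in (8, 4):
    q, scale = quantize_weights(w, bits)
    w_hat = q.astype(np.float32) * scale   # dequantize: compute still runs in float32
    err = np.abs(x @ w.T - x @ w_hat.T).max()
    stored = w.size * bits // 8            # bytes if the integers were packed
    print(f"{bits}-bit: ~{stored} B vs {w.nbytes} B float32, max output error {err:.4f}")
```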
Full Quantization
Quantizes both the weights and the activations of the model, so integer operations are performed during inference. This reduces the model size, speeds up model inference, and lowers power consumption. For scenarios that need faster inference and lower power consumption, post training full quantization can be used. To compute the quantization parameters of the activations, the user needs to provide a calibration dataset, as illustrated in the sketch below.
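The sketch below shows why a calibration dataset is needed: activation ranges are observed on a few batches to derive the activation scale, after which the matrix multiply can run in integer arithmetic. It is a simplified, framework-agnostic NumPy illustration (symmetric int8 with per-tensor scales assumed), not the behavior of any specific MindSpore tool.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
calibration_data = [rng.standard_normal((8, 64)).astype(np.float32) for _ in range(10)]

# 1. Calibration: run a small dataset through the float model and record activation ranges.
act_max = max(np.abs(batch).max() for batch in calibration_data)
act_scale = act_max / 127.0                    # symmetric int8 scale for activations
w_scale = np.abs(w).max() / 127.0              # weights can be quantized without data
w_q = np.clip(np.round(w / w_scale), -128, 127).astype(np.int8)

# 2. Inference: quantize the incoming activation and run the matmul in integer arithmetic.
x = rng.standard_normal((1, 64)).astype(np.float32)
x_q = np.clip(np.round(x / act_scale), -128, 127).astype(np.int8)
acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T   # int32 accumulation
y = acc.astype(np.float32) * (act_scale * w_scale)    # rescale back to float
print(np.abs(y - x @ w.T).max())                      # small error vs. the float32 result
```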
Post Training Quantization Tools
Choose the corresponding post training quantization tool according to the hardware platform on which the model will be deployed for inference.
| Post Training Quantization Tools | Quantization Method Supported | Inference Hardware Platform Supported | Quantization Model Deployment |
| --- | --- | --- | --- |
| MindSpore Lite Converter | Weights Quantization | CPU | |
| Ascend Model Compression Tool | Full Quantization | Ascend 310 AI Processor | |