Overview
This section describes the quantization options supported by AMCT.
PTQ takes two forms, manual quantization and automatic quantization, depending on whether the quantization configuration file is manually tuned after quantization. Currently, only manual quantization is supported. For details about the quantization algorithms available for PTQ, see PTQ Algorithms.
If the accuracy of the quantized model is not as expected, perform Manual Tuning. The terms used in the quantization process are explained in the following table.
| Terminology | Description |
|---|---|
| Activation quantization and weight quantization | PTQ is further classified into activation quantization and weight quantization based on the quantization object. Currently, the Ascend AI Processor supports both symmetric and asymmetric quantization of activations, but only symmetric quantization of weights. The symmetric and asymmetric modes differ in whether the center point of the quantized data is 0. For details about the quantization algorithm, see Quantization Algorithm Principles. |
| Test dataset | A test dataset is a subset of the dataset used for the final test of model accuracy. |
| Calibration | Calibration refers to the forward inference process in PTQ. It is conducted to determine the quantization factors for quantizing activations. |
| Calibration dataset | A calibration dataset is used during forward inference in PTQ. It should contain a sufficient number of representative samples; a subset of the test dataset is suggested. If the selected calibration dataset does not match your model or is not representative enough, the calculated quantization factors will perform poorly on the complete dataset, resulting in a large accuracy drop. |
| Quantization factors | Quantization factors are the parameters used to quantize floating-point values into integer values (for example, INT8). The factors consist of Scale and Offset, which appear in the formula that maps each floating-point value to an integer. |
| Scale | The quantization factor for scaling floating-point values. |
| Offset | The quantization factor for the zero-point offset. |
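To make the terms above concrete, the sketch below derives Scale and Offset from a calibration dataset using a simple min/max method and applies the common quantization formula `round(x / scale) + offset`. This is an illustrative assumption, not AMCT's actual implementation: AMCT's calibration algorithms and exact formula may differ, and the function names here are hypothetical.

```python
import numpy as np

def calibration_factors(samples, num_bits=8, symmetric=False):
    """Derive Scale and Offset from calibration samples (min/max method;
    shown for illustration only -- AMCT's algorithms may differ)."""
    qmin, qmax = -2 ** (num_bits - 1), 2 ** (num_bits - 1) - 1  # INT8: [-128, 127]
    if symmetric:
        # Symmetric mode (the only mode supported for weights):
        # the zero point is fixed at 0 and the range is centered on zero.
        scale = float(np.max(np.abs(samples))) / qmax
        offset = 0
    else:
        # Asymmetric mode (supported for activations): the offset shifts
        # the zero point so the integer range covers [min, max].
        lo, hi = float(np.min(samples)), float(np.max(samples))
        scale = (hi - lo) / (qmax - qmin)
        offset = int(round(qmin - lo / scale))
    return scale, offset

def quantize(x, scale, offset, num_bits=8):
    """Quantize floating-point values to integers: round(x / scale) + offset."""
    qmin, qmax = -2 ** (num_bits - 1), 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale) + offset, qmin, qmax).astype(np.int8)

def dequantize(q, scale, offset):
    """Recover an approximate floating-point value from the integer."""
    return (q.astype(np.float32) - offset) * scale

# A poorly chosen calibration dataset yields scale/offset values that
# misrepresent the real data range -- hence the accuracy-drop warning above.
activations = np.linspace(0.0, 6.0, 1024)          # stand-in calibration data
scale, offset = calibration_factors(activations)    # asymmetric, for activations
q = quantize(activations, scale, offset)
round_trip_error = np.max(np.abs(dequantize(q, scale, offset) - activations))
```

Note that the symmetric case simply forces `offset = 0`, which is why symmetric vs. asymmetric is described above in terms of whether the center point of the quantized data is 0.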
