Introduction
This section describes the quantization options supported by AMCT.
Quantization Classification
- Post-training quantization
Quantization takes two forms, Manual Quantization and Automatic Quantization, depending on whether the quantization configuration file is manually tuned after quantization. For details about the available quantization algorithms for PTQ, see PTQ Algorithms.
Based on whether the weight data is compressed, quantization is further divided into Uniform Quantization and Non-Uniform Quantization (NUQ). If the accuracy of the quantized model is not as expected, perform Automatic Quantization (recommended) or Manual Tuning until the accuracy meets your requirement.
- Quantization aware training
Currently, only Manual Quantization is supported. If the accuracy of the quantized model is not as expected, perform Manual Tuning. For details about the available quantization algorithms for QAT, see QAT Algorithms.
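The Uniform Quantization form mentioned above maps floating-point values onto evenly spaced integer levels. The following is a minimal sketch of uniform symmetric INT8 weight quantization (weights support only symmetric mode, so the zero point is 0); it illustrates the idea only and is not the AMCT implementation.

```python
import numpy as np

def quantize_weights_symmetric(w, num_bits=8):
    """Uniform symmetric quantization of a weight tensor to signed integers.

    Illustrative sketch only: the scale is derived from the maximum
    absolute weight, and the zero point is fixed at 0 (symmetric mode).
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for INT8
    scale = float(np.max(np.abs(w))) / qmax  # symmetric: center point is 0
    if scale == 0.0:
        scale = 1.0                          # all-zero tensor edge case
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the integers back to floating point for inspection."""
    return q.astype(np.float32) * scale

w = np.array([-1.2, -0.3, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_weights_symmetric(w)
w_hat = dequantize(q, scale)
```

Because the levels are evenly spaced, the round-trip error of any in-range value is at most half a quantization step (scale / 2); NUQ instead places levels non-uniformly to better fit the weight distribution.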
Terminology
The terms used in the quantization process are explained as follows:
| Terminology | Description |
|---|---|
| Activation quantization and weight quantization | PTQ and QAT are further classified into activation quantization and weight quantization based on the quantization object. Currently, the Ascend AI Processor supports both symmetric and asymmetric quantization of activations, but only symmetric quantization of weights. The symmetric and asymmetric modes differ in whether the center point of the quantized data is 0. For details about the quantization algorithms, see Quantization Algorithm Principles. |
| Test dataset | A subset of the dataset used for the final test of model accuracy. |
| Calibration | The forward inference process in PTQ, conducted to determine the quantization factors for quantizing activations. |
| Calibration dataset | The dataset used during forward inference in PTQ. It should contain a sufficient number of representative samples; a subset of the test dataset is recommended. If the calibration dataset does not match your model or is not representative enough, the computed quantization factors will generalize poorly to the complete dataset, resulting in a high accuracy drop. |
| Training dataset | A subset of the dataset used to train models in the user training network. |
| Quantization factors | The parameters used to quantize floating-point values into integer values, consisting of Scale and Offset. The formula for quantizing a floating-point value into an integer (for example, INT8) is: quantized value = round(float value / Scale) + Offset. |
| Scale | The quantization factor that scales floating-point values down to the integer range. |
| Offset | The quantization factor that shifts the quantized values. For symmetric quantization, Offset is 0. |
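The relationship between calibration and the quantization factors can be sketched as follows. This is a toy stand-in, not the AMCT API: `calibrate` scans precomputed activation batches for their range (the real toolkit derives ranges during model forward inference), and the Scale/Offset formulas assume standard asymmetric INT8 quantization.

```python
import numpy as np

def calibrate(batches):
    """'Calibration': scan forward-pass activations to find their range.

    Toy version: the batches stand in for activations observed during
    forward inference on the calibration dataset.
    """
    lo = min(float(b.min()) for b in batches)
    hi = max(float(b.max()) for b in batches)
    return min(lo, 0.0), max(hi, 0.0)       # the range must contain 0

def activation_factors(lo, hi, num_bits=8):
    """Derive Scale and Offset for asymmetric INT8 activation quantization."""
    qmin, qmax = -2 ** (num_bits - 1), 2 ** (num_bits - 1) - 1  # -128, 127
    scale = (hi - lo) / (qmax - qmin)
    offset = int(round(qmin - lo / scale))   # shifts lo onto qmin
    return scale, offset

def quantize(x, scale, offset, num_bits=8):
    """quantized value = round(float value / Scale) + Offset, clipped."""
    qmin, qmax = -2 ** (num_bits - 1), 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale) + offset, qmin, qmax).astype(np.int8)

def dequantize(q, scale, offset):
    return (q.astype(np.float32) - offset) * scale

# Non-negative activations (e.g. post-ReLU), one calibration batch.
batches = [np.random.default_rng(0).uniform(0.0, 6.0, 64).astype(np.float32)]
lo, hi = calibrate(batches)
scale, offset = activation_factors(lo, hi)
x_hat = dequantize(quantize(batches[0], scale, offset), scale, offset)
```

This also shows why a representative calibration dataset matters: Scale and Offset are fixed from the observed range, so activations outside that range on the full dataset are clipped, which is one source of accuracy drop.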
