Overview

This section describes the quantization options supported by AMCT.

PTQ takes two forms, manual quantization and automatic quantization, distinguished by whether the quantization configuration file is manually tuned after quantization. Currently, only manual quantization is supported. For details about the quantization algorithms available for PTQ, see PTQ Algorithms.

If the accuracy of the quantized model is not as expected, perform Manual Tuning. The terms used in the quantization process are explained in Table 1.

Table 1 Related Concepts in the Quantization Process

Activation quantization and weight quantization

PTQ is further classified into activation quantization and weight quantization, according to the object being quantized.

Currently, the Ascend AI Processor supports both symmetric and asymmetric quantization of activations, but only symmetric quantization of weights. A quantization mode is symmetric when the center point (zero point) of the quantized data range is 0, and asymmetric otherwise. For details about the quantization algorithm, see Quantization Algorithm Principles.

  • Activation quantization

    Activation quantization refers to quantizing activations to lower-bit representations according to the distribution of activation values. Because activations are typically large in volume and their value distribution at each layer cannot be determined until a forward pass is run (during either inference or training), activation quantization is performed at inference or training time.

    In PTQ, activation quantization is conducted online: a bypass quantization node is inserted at each layer to be quantized in your inference model to collect the activations input to that layer, and the quantization factors scale and offset are then obtained through calibration. Calibration uses only a subset of samples from the dataset, which improves efficiency (a minimal sketch of this factor computation follows this list).

  • Weight quantization

    Weight quantization refers to quantizing weights to lower-bit representations according to the distribution of weight values.

    In PTQ, weight quantization is conducted offline: weights are read directly from your inference model, quantized using the quantization algorithm, and then written back to the model before activation quantization.
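
To make the symmetric/asymmetric distinction and the calibration step concrete, here is a minimal sketch, not AMCT's actual implementation, of how INT8 quantization factors could be derived from activations collected during calibration; all function names are hypothetical:

    import numpy as np

    def symmetric_factors(x, num_bits=8):
        # Symmetric mode: the quantized range is centered on 0, so only a
        # scale is needed and the offset is always 0 (as used for weights).
        qmax = 2 ** (num_bits - 1) - 1                 # 127 for INT8
        scale = np.abs(x).max() / qmax                 # map max magnitude to 127
        return scale, 0

    def asymmetric_factors(x, num_bits=8):
        # Asymmetric mode: the full [min, max] range is mapped onto
        # [-128, 127], so a nonzero offset can shift the center away from 0.
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        scale = (x.max() - x.min()) / (qmax - qmin)
        offset = int(round(qmin - x.min() / scale))    # aligns x.min() with qmin
        return scale, offset

    # Calibration-style usage: accumulate activations from a few forward
    # passes over the calibration dataset, then compute the factors once.
    collected = np.concatenate([np.random.randn(1024) * 3 + 1 for _ in range(10)])
    print(symmetric_factors(collected))
    print(asymmetric_factors(collected))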

Test dataset

A test dataset is a subset of the dataset used for the final evaluation of model accuracy.

Calibration

Calibration refers to the forward inference process in PTQ, conducted to determine the quantization factors for quantizing activations.

Calibration dataset

A calibration dataset is used during the forward inference in PTQ. The calibration dataset should contain a sufficient number of representative samples; a subset of samples from the test dataset is suggested. If the selected calibration dataset does not match your model or is not representative enough, the calculated quantization factors will generalize poorly to the complete dataset, resulting in a large accuracy drop.
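
For instance, a calibration dataset might be drawn as a small random subset of the test dataset; the sample count and sampling strategy below are illustrative assumptions, not AMCT requirements:

    import random

    test_dataset = list(range(10000))        # stand-in for your test samples
    random.seed(0)                           # make the selection reproducible
    calibration_dataset = random.sample(test_dataset, 32)   # small, representative subset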

Quantization factors

Quantization factors are the parameters used to quantize floating-point values into integer values. The factors include Scale and Offset.

The formula for quantizing a floating point into an integer (for example, INT8) is as follows:
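
    INT8 value = clamp(round(floating-point value / scale) + offset, -128, 127)

This is the standard linear-quantization form implied by the Scale and Offset definitions below; the exact rounding and saturation behavior is implementation-specific. Conversely, dequantization approximates the original value as (INT8 value - offset) × scale.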

Scale

The quantization factor for scaling floating points, including:

  • scale_d for activation quantization. Only unified (per-tensor) activation quantization is supported.
  • scale_w for weight quantization. If a scalar is set, the weights of the current layer are quantized in a unified manner; if a vector is set, the weights of the current layer are quantized channel-wise. For more details, see Record Files.

Offset

The quantization factor for the offset, including:

  • offset_d for activation quantization. Only unified (per-tensor) activation quantization is supported.
  • offset_w for weight quantization. If a scalar is set, the weights of the current layer are quantized in a unified manner; if a vector is set, the weights of the current layer are quantized channel-wise. The dimensions of offset_w must be consistent with those of scale_w. For more details, see Record Files.
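
To make the scalar-versus-vector distinction concrete, here is a minimal sketch of unified and channel-wise symmetric INT8 weight quantization; this is illustrative code, not AMCT's API, and offset_w is implicitly 0 throughout because weight quantization is symmetric:

    import numpy as np

    def quantize_weights(w, scale_w):
        # Symmetric INT8 weight quantization: divide by the scale, round to
        # the nearest integer, and saturate to the INT8 range.
        q = np.round(w / scale_w)
        return np.clip(q, -128, 127).astype(np.int8)

    w = np.random.randn(64, 3, 3, 3)    # e.g. conv weights: (out_ch, in_ch, kh, kw)

    # Unified (scalar scale_w): one scale shared by all weights of the layer.
    scale_unified = np.abs(w).max() / 127
    q_unified = quantize_weights(w, scale_unified)

    # Channel-wise (vector scale_w): one scale per output channel; its length
    # (like that of offset_w) must match the number of output channels.
    scale_per_channel = np.abs(w).max(axis=(1, 2, 3), keepdims=True) / 127
    q_per_channel = quantize_weights(w, scale_per_channel)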