Accuracy-based Automatic Quantization
Accuracy-based automatic quantization is a technique for preserving accuracy: before the final post-training quantization (PTQ), a model quantization configuration that yields satisfactory accuracy is searched for automatically.
Accuracy-based automatic quantization is similar to Manual Quantization, but you do not need to manually tune the quantization configuration file, which reduces your optimization workload and makes quantization more efficient. For details about layers that can be quantized and their quantization restrictions, see Uniform Quantization. For a quantization example, see Additional Samples.
API Call Sequence
Figure 1 shows the API call sequence.
The workflow goes through the following steps:
1. Generate a quantization configuration file by calling the create_quant_config API, and then run accuracy-based automatic quantization by calling the accuracy_based_auto_calibration API.
2. Pass an evaluator instance to the accuracy_based_auto_calibration API call to test the accuracy of the source model.
In this process, the quantization strategy module in accuracy_based_auto_calibration is called to output the initialized quantization configuration file. The file records all layers that support quantization.
3. Use the initial quantization configuration file (generated by the create_quant_config API call in 1) to run post-training quantization on the model, obtaining the accuracy of the fake-quantized model.
4. Compare the accuracy results of both models. If the accuracy drop of the fake-quantized model is below the predefined limit, output the quantized model. Otherwise, perform accuracy-based automatic quantization:
   a. Run inference on the source ONNX model and dump the input activations of each layer.
   b. Use the quantization factors obtained after PTQ to build single-operator networks of quantization layers. Then, use the buffered activations to calculate the cosine similarity between the output data of each fake-quantized single-operator network and that of its source ONNX equivalent.
   c. Pass the cosine similarity list to the quantization strategy module in accuracy_based_auto_calibration. After certain layers have been dequantized based on the initial quantization configuration file generated in 2, the quantization strategy module outputs a new quantization configuration file.
   d. Using the new quantization configuration file, run PTQ to obtain a new fake-quantized model.
   e. Analyze the accuracy of the new fake-quantized model by calling the evaluator module in accuracy_based_auto_calibration.
      - If the model accuracy is acceptable, output a fake-quantized model and a deployable model.
      - If the model accuracy is unacceptable, dequantize the layer with the lowest cosine similarity and go back to 4.c to output a new quantization configuration file.
      - If the model accuracy is still unsatisfactory after all layers have been dequantized, cancel quantizing the model. In this case, no quantized model is generated.
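The search loop described above can be sketched as follows. This is an illustrative sketch only, not AMCT code: `sim` (a layer-to-cosine-similarity map) and `accuracy_of` (which returns the fake-quantized accuracy for a set of quantized layers) are hypothetical stand-ins for the real PTQ and inference steps.

```python
def auto_calibrate(layers, sim, accuracy_of, original_acc, acc_limit):
    """Search for a quantization configuration with an acceptable accuracy drop.

    sim maps layer -> cosine similarity; accuracy_of returns the
    fake-quantized accuracy for a set of quantized layers. Both stand
    in for the real PTQ and inference steps.
    """
    quantized = set(layers)          # start with every quantizable layer
    while quantized:
        drop = original_acc - accuracy_of(quantized)
        if drop < acc_limit:
            return quantized         # accuracy drop acceptable: done
        # Roll back (dequantize) the layer with the lowest cosine similarity.
        worst = min(quantized, key=lambda layer: sim[layer])
        quantized.remove(worst)
    return None                      # every layer rolled back: cancel quantization
```

If no subset of layers meets the accuracy limit, the search returns nothing, matching the workflow's final case in which no quantized model is generated.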
Figure 2 shows the principles of the accuracy_based_auto_calibration API.
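The cosine similarity used to rank quantization layers compares the output of each fake-quantized single-operator network with that of its source ONNX equivalent. A minimal NumPy sketch of the metric (the function name and flattening behavior here are illustrative assumptions, not AMCT internals):

```python
import numpy as np

def cosine_similarity(original, fake_quant):
    """Cosine similarity between two flattened layer outputs (1.0 = identical direction)."""
    a = np.asarray(original, dtype=np.float64).ravel()
    b = np.asarray(fake_quant, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A value close to 1.0 means quantizing the layer barely changes its output; the layer with the lowest value is the first candidate for rollback.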
Examples
This example demonstrates how to use AMCT to perform accuracy-based automatic quantization. As part of the process, you define a callback function that obtains the model inference accuracy. This user-defined callback function is important because AMCT filters the quantization layers based on the accuracy it returns.
- Take the following steps to get started. Update the sample code based on your situation, and tweak the arguments passed to AMCT API calls as required.
- Import the AMCT package.
```python
import os
import amct_onnx as amct
from amct_onnx.common.auto_calibration import AutoCalibrationEvaluatorBase
```
- Define the following callback functions based on the source model and test dataset: calibration(), evaluate(), and metric_eval(). (Update the sample code based on your situation.)
The arguments passed to these callback functions must be consistent with those defined by the AutoCalibrationEvaluatorBase base class, where:
- calibration() calibrates the model by running forward passes.
- evaluate() evaluates the model accuracy.
- metric_eval() evaluates the accuracy drop of the fake-quantized model by comparing the accuracy of the fake-quantized model with that of the source model. If the accuracy drop is below the predefined limit, True is returned; otherwise, False is returned.
```python
class ModelEvaluator(AutoCalibrationEvaluatorBase):
    """Evaluator for the model."""

    def __init__(self, *args, **kwargs):
        # Initialize member variables.
        # Set the accuracy drop limit (expected_acc_loss is user-defined).
        self.diff = expected_acc_loss

    def calibration(self, model_file):
        # Calibrate the model by running batch_num forward passes.
        pass

    def evaluate(self, model_file):
        # Evaluate the input model and return its accuracy metric.
        pass

    def metric_eval(self, original_metric, new_metric):
        # Evaluate the accuracy drop of the fake-quantized model.
        # Return True if the drop is below the predefined limit; otherwise, False.
        loss = original_metric - new_metric
        if loss < self.diff:
            return True, loss
        return False, loss
```
- Set the directories of your model files. (Update the sample code based on your situation.)
```python
model_file = os.path.realpath(user_model_file)
```
- Call AMCT to run accuracy-based automatic quantization.
- Create a quantization configuration file.
```python
config_json_file = './config.json'
skip_layers = []
batch_num = 1
activation_offset = True
amct.create_quant_config(config_json_file, model_file, skip_layers,
                         batch_num, activation_offset)
scale_offset_record_file = os.path.join(TMP, 'scale_offset_record.txt')
result_path = os.path.join(RESULT, 'model')
```
- Initialize an evaluator.
```python
evaluator = ModelEvaluator()
```
- Start automatic search for the model quantization configuration that yields satisfactory accuracy.
```python
amct.accuracy_based_auto_calibration(
    model_file=model_file,
    model_evaluator=evaluator,
    config_file=config_json_file,
    record_file=scale_offset_record_file,
    save_dir=result_path,
    strategy='BinarySearch',
    sensitivity='CosineSimilarity')
```
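For reference, a self-contained toy evaluator that satisfies the callback contract described above. The base class here is a local stand-in so the sketch runs without AMCT installed, and the accuracy values are made up; a real evaluate() would run inference on your validation set.

```python
class AutoCalibrationEvaluatorBase:
    """Stand-in for amct_onnx's base class so this sketch runs without AMCT."""

class ToyEvaluator(AutoCalibrationEvaluatorBase):
    def __init__(self, expected_acc_loss):
        self.diff = expected_acc_loss   # accuracy drop limit

    def calibration(self, model_file):
        pass                            # a real evaluator would run forward passes

    def evaluate(self, model_file):
        return 0.75                     # stand-in for a measured accuracy

    def metric_eval(self, original_metric, new_metric):
        # True if the accuracy drop stays below the limit.
        loss = original_metric - new_metric
        return loss < self.diff, loss
```

The (bool, loss) pair returned by metric_eval() is what drives the search: a False result causes another layer to be dequantized and the loop to continue.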

