accuracy_based_auto_calibration

Function Usage

Calibrates the input model based on the input configuration file, searching for a quantization configuration that meets the accuracy requirement, and outputs both a fake-quantized model for accuracy simulation in the ONNX Runtime environment and a model deployable on the Ascend AI Processor for inference.

Constraints

None

Prototype

accuracy_based_auto_calibration(model_file, model_evaluator, config_file, record_file, save_dir, strategy='BinarySearch', sensitivity='CosineSimilarity')

Parameters

| Parameter       | Input/Return | Description | Restrictions |
|-----------------|--------------|-------------|--------------|
| model_file      | Input | User ONNX model file in .onnx format. | Data type: string |
| model_evaluator | Input | Python instance for automatic quantization calibration and accuracy evaluation. | Data type: Python instance |
| config_file     | Input | Quantization configuration file generated by the user. | Data type: string |
| record_file     | Input | Quantization factor record file. Any existing file in the path is overwritten by this API call. | Data type: string |
| save_dir        | Input | Model save path. Must include the model name prefix, for example, ./quantized_model/*model. | Data type: string |
| strategy        | Input | Policy for searching for a quantization configuration that meets the accuracy requirement. The binary search (dichotomy) policy is used by default. | Data type: string or Python instance. Default value: BinarySearch |
| sensitivity     | Input | Metric used to evaluate how sensitive each layer to be quantized is to quantization. The cosine similarity metric is used by default. | Data type: string or Python instance. Default value: CosineSimilarity |
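To illustrate the idea behind the default BinarySearch strategy, the following toy sketch performs a dichotomy search for the smallest number of sensitivity-ranked layers to roll back to the original precision. This is an illustration only, not AMCT's actual implementation; the `loss_after_rollback` callable is a hypothetical stand-in for evaluating a fake-quantized model with a given number of rolled-back layers.

```python
def binary_search_rollback(num_layers, loss_after_rollback, target_loss):
    """Find the smallest rollback count whose accuracy loss meets target_loss,
    assuming loss decreases as more of the most quantization-sensitive layers
    are rolled back to the original precision.

    loss_after_rollback: callable mapping rollback count -> accuracy loss (%).
    """
    low, high = 0, num_layers
    best = num_layers                      # worst case: roll back every layer
    while low <= high:
        mid = (low + high) // 2
        if loss_after_rollback(mid) <= target_loss:
            best = mid                     # loss acceptable; try rolling back fewer layers
            high = mid - 1
        else:
            low = mid + 1                  # loss too large; roll back more layers
    return best

# Toy loss model: quantizing all 10 layers costs 1.0% accuracy,
# and each rolled-back layer recovers 0.2%.
print(binary_search_rollback(10, lambda k: 1.0 - 0.2 * k, target_loss=0.5))  # → 3
```

Each probe of the search corresponds to one calibration-plus-evaluation round in the real API, which is why a dichotomy needs far fewer rounds than rolling back layers one by one.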

Returns

None

Outputs

  • A fake-quantized model for accuracy simulation on ONNX Runtime, with a file name containing the fake_quant keyword.
  • A deployable model file, with a file name containing the deploy keyword. The model can be deployed on the Ascend AI Processor after being converted by the ATC tool.
  • A quantization factor record file (record_file).
  • A sensitivity file that records how sensitive each layer is to quantization; the layers to be rolled back (left unquantized) are determined based on it.
  • An automatic quantization rollback history file that records information about the rolled-back layers.
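The CosineSimilarity sensitivity metric listed above compares a layer's outputs before and after fake quantization; the lower the similarity, the more sensitive the layer. A minimal sketch of the metric itself (illustration only, not AMCT's internal code; the activation values are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened activation tensors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

original = [0.5, -1.2, 3.0, 0.1]     # hypothetical float outputs of one layer
quantized = [0.5, -1.25, 3.0, 0.1]   # hypothetical fake-quantized outputs
print(cosine_similarity(original, quantized))  # close to 1.0: layer is barely affected
```

Layers whose similarity drops the most are the first candidates for rollback when the accuracy target is not met.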

Examples

import os

import amct_onnx as amct
from amct_onnx.common.auto_calibration import AutoCalibrationEvaluatorBase

# You need to implement the calibration(), evaluate() and metric_eval() methods of AutoCalibrationEvaluatorBase
class AutoCalibrationEvaluator(AutoCalibrationEvaluatorBase):
    """ subclass of AutoCalibrationEvaluatorBase"""
    def __init__(self, target_loss, batch_num):
        super(AutoCalibrationEvaluator, self).__init__()
        self.target_loss = target_loss
        self.batch_num = batch_num

    def calibration(self, model_file):
        """ implement the calibration function of AutoCalibrationEvaluatorBase
            calibration() needs to finish the calibration inference procedure,
            so the number of inference batches must be >= the batch_num passed
            to create_quant_config
        """
        # onnx_forward() is a user-defined inference helper, not part of amct_onnx
        onnx_forward(onnx_model=model_file, batch_size=32, iterations=self.batch_num)

    def evaluate(self, model_file):
        """ implement the evaluate function of AutoCalibrationEvaluatorBase
            params: model_file: model in .onnx format
            return: the accuracy of the input model on the eval dataset, or any
                    other metric that describes the 'accuracy' of the model
        """
        top1, top5 = onnx_forward(onnx_model=model_file, batch_size=32, iterations=5)
        return top1

    def metric_eval(self, original_metric, new_metric):
        """ implement the metric_eval function of AutoCalibrationEvaluatorBase
            params: original_metric: the accuracy returned by evaluate() on the non-quantized model
                    new_metric: the accuracy returned by evaluate() on the fake-quantized model
            return:
                   [0]: whether the accuracy loss between the non-quantized model
                        and the fake-quantized model satisfies the requirement
                   [1]: the accuracy loss between the two models
        """
        loss = original_metric - new_metric
        # target_loss is expressed in percentage points, loss as a fraction
        if loss * 100 < self.target_loss:
            return True, loss
        return False, loss
   ...
 
    # step 1: create the quant config json file
    config_json_file = os.path.join(TMP, 'config.json')  # TMP and PATH are user-defined directories
    skip_layers = []
    batch_num = 1
    model_file = "mobilenet_v2.onnx"
    amct.create_quant_config(
        config_file=config_json_file, model_file=model_file, skip_layers=skip_layers, batch_num=batch_num,
        activation_offset=True, config_defination=None)

    scale_offset_record_file = os.path.join(TMP, 'scale_offset_record.txt')
    result_path = os.path.join(PATH, 'results/mobilenet_v2')

    # step 2: construct the instance of AutoCalibrationEvaluator
    evaluator = AutoCalibrationEvaluator(target_loss=0.5, batch_num=batch_num)

    # step 3: use accuracy_based_auto_calibration to quantize the model
    amct.accuracy_based_auto_calibration(
        model_file=model_file,
        model_evaluator=evaluator,
        config_file=config_json_file,
        record_file=scale_offset_record_file,
        save_dir=result_path,
        strategy='BinarySearch',
        sensitivity='CosineSimilarity'
    )