Simplified PTQ Configuration File

Table 1 describes the fields of the calibration_config_caffe.proto file, located at /amct_caffe/proto/calibration_config_caffe.proto under the AMCT installation directory.

Table 1 Parameter description of calibration_config_caffe.proto

Each message is listed below with its fields in the form "parameter (rule, type): description".

AMCTConfig: Simplified PTQ configuration of AMCT.

  • batch_num (optional, uint32): Number of batches used for quantization.
  • activation_offset (optional, bool): Whether to quantize activations with an offset. This is a global configuration parameter.
      - true: with offset; activations are asymmetrically quantized.
      - false: without offset; activations are symmetrically quantized.
  • joint_quant (optional, bool): Eltwise joint quantization switch. Defaults to false, indicating that joint quantization is disabled. If true, network performance may improve but precision may be compromised.
  • skip_layers (repeated, string): Names of layers to exclude from quantization.
  • skip_layer_types (repeated, string): Types of layers to exclude from quantization.
  • nuq_config (optional, NuqConfig): NUQ configuration.
  • common_config (optional, CalibrationConfig): Common quantization configuration; a global parameter applied to every layer that is not overridden by override_layer_types or override_layer_configs. Parameter priority: override_layer_configs > override_layer_types > common_config.
  • override_layer_types (repeated, OverrideLayerType): Overrides the quantization configuration for all layers of the specified types, allowing differentiated quantization of those layers (for example, changing the quantization factor search step from 0.01 to 0.02). Parameter priority: override_layer_configs > override_layer_types > common_config.
  • override_layer_configs (repeated, OverrideLayer): Overrides the quantization configuration for individual layers, allowing differentiated quantization of those layers (for example, changing the quantization factor search step from 0.01 to 0.02). Parameter priority: override_layer_configs > override_layer_types > common_config.
  • do_fusion (optional, bool): BN fusion switch. Defaults to true, indicating that BN fusion is enabled.
  • skip_fusion_layers (repeated, string): Layers to exclude from BN fusion.
  • conv_calibration_config (optional, CalibrationConfig): Quantization configuration applied to all convolutional and deconvolutional layers that are not overridden. This method is not recommended.
  • fc_calibration_config (optional, CalibrationConfig): Quantization configuration applied to all InnerProduct and average-pooling layers that are not overridden. This method is not recommended.

NuqConfig: NUQ configuration.

  • mapping_file (required, string): JSON file of the quantized model, obtained by converting the deployable model produced by uniform quantization into an offline model with the ATC tool.
  • nuq_quantize (optional, NUQuantize): NUQ algorithm configuration.
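
Putting the NuqConfig fields together, a non-uniform quantization section might look like the sketch below. The file path and numeric values are illustrative placeholders, not recommended settings:

```
nuq_config : {
    mapping_file : "./model_quant.json"    # JSON mapping file obtained via the ATC tool (placeholder path)
    nuq_quantize : {
        num_steps : 16                     # illustrative number of non-uniform quantization steps
        num_of_iteration : 100             # illustrative number of optimization iterations
    }
}
```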

OverrideLayerType: Quantization configuration override by layer type.

  • layer_type (required, string): Quantizable layer type to override.
  • calibration_config (required, CalibrationConfig): Quantization configuration to apply.

OverrideLayer: Quantization configuration override by layer.

  • layer_name (required, string): Name of the layer to override.
  • calibration_config (required, CalibrationConfig): Quantization configuration to apply.

CalibrationConfig: Calibration-based quantization configuration.

  • arq_quantize (ARQuantize): Weight quantization algorithm; ARQ algorithm configuration.
  • nuq_quantize (NUQuantize): Weight quantization algorithm; non-uniform quantization (NUQ) algorithm configuration.
  • ifmr_quantize (FMRQuantize): Activation quantization algorithm; IFMR algorithm configuration.
  • hfmg_quantize (HFMGQuantize): Activation quantization algorithm; HFMG algorithm configuration.
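
A CalibrationConfig combines one weight quantization algorithm with one activation quantization algorithm; ifmr_quantize and hfmg_quantize are mutually exclusive. A minimal sketch with illustrative values:

```
calibration_config : {
    arq_quantize : {          # weight quantization: ARQ
        channel_wise : true
    }
    hfmg_quantize : {         # activation quantization: HFMG (cannot be combined with ifmr_quantize)
        num_of_bins : 4096
    }
}
```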

ARQuantize: ARQ algorithm configuration. For details about the algorithm, see ARQ Algorithm.

  • channel_wise (optional, bool): Whether to use a separate quantization factor for each channel.

FMRQuantize: IFMR algorithm for activation quantization. For details about the algorithm, see ifmr: IFMR algorithm for activation quantization. This message is mutually exclusive with HFMGQuantize.

  • search_range_start (optional, float): Start of the quantization factor search range.
  • search_range_end (optional, float): End of the quantization factor search range.
  • search_step (optional, float): Quantization factor search step.
  • max_percentile (optional, float): Percentile used as the upper bound when searching for the maximum value.
  • min_percentile (optional, float): Percentile used as the lower bound when searching for the minimum value.
  • asymmetric (optional, bool): Whether to perform asymmetric quantization. It is used to select the layer-wise quantization algorithm.
      - true: asymmetric quantization
      - false: symmetric quantization
    If this parameter is set for override_layer_configs, override_layer_types, or common_config, or if the activation_offset parameter is set, the priority is: override_layer_configs > override_layer_types > common_config > activation_offset.
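
The priority chain above means that an asymmetric setting inside common_config (or an override) takes precedence over the global activation_offset switch. A minimal sketch with illustrative values:

```
activation_offset : true          # global switch: quantize activations with offset
common_config : {
    ifmr_quantize : {
        asymmetric : false        # higher priority: layers using common_config are quantized symmetrically
    }
}
```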

HFMGQuantize: HFMG algorithm for activation quantization. For details about the algorithm, see HFMG for Activation Quantization. This message is mutually exclusive with FMRQuantize.

  • num_of_bins (optional, uint32): Number of bins (the minimum unit of a histogram). Valid values: {1024, 2048, 4096, 8192}. Defaults to 4096.
  • asymmetric (optional, bool): Whether to perform asymmetric quantization. It is used to select the layer-wise quantization algorithm.
      - true: asymmetric quantization
      - false: symmetric quantization
    If this parameter is set for override_layer_configs, override_layer_types, or common_config, or if the activation_offset parameter is set, the priority is: override_layer_configs > override_layer_types > common_config > activation_offset.

NUQuantize: NUQ algorithm configuration. For details about the algorithm, see nuq_quantize: NUQ algorithm for weight quantization.

  • num_steps (optional, uint32): Number of quantization steps for non-uniform quantization.
  • num_of_iteration (optional, uint32): Number of iterations for non-uniform quantization optimization.

  • The following is an example simplified uniform quantization configuration file (quant.cfg).
    # global quantize parameter
    batch_num : 2
    activation_offset : true
    joint_quant : false
    skip_layers : "Opname"
    skip_layer_types : "Optype"
    do_fusion : true
    skip_fusion_layers : "Opname"
    common_config : {
        arq_quantize : {
            channel_wise : true
        }
        ifmr_quantize : {
            search_range_start : 0.7
            search_range_end : 1.3
            search_step : 0.01
            max_percentile : 0.999999
            min_percentile : 0.999999
            asymmetric : true
        }
    }

    override_layer_types : {
        layer_type : "Optype"
        calibration_config : {
            arq_quantize : {
                channel_wise : false
            }
            ifmr_quantize : {
                search_range_start : 0.8
                search_range_end : 1.2
                search_step : 0.02
                max_percentile : 0.999999
                min_percentile : 0.999999
                asymmetric : false
            }
        }
    }

    override_layer_configs : {
        layer_name : "Opname"
        calibration_config : {
            arq_quantize : {
                channel_wise : true
            }
            ifmr_quantize : {
                search_range_start : 0.8
                search_range_end : 1.2
                search_step : 0.02
                max_percentile : 0.999999
                min_percentile : 0.999999
                asymmetric : false
            }
        }
    }

    If the HFMG algorithm is used for activation quantization, replace the ifmr_quantize blocks in the preceding configuration file with hfmg_quantize blocks as follows. (The following configuration file is only an example. Modify it as required.)

    # global quantize parameter
    activation_offset : true
    batch_num : 1
    ...
    common_config : {
        hfmg_quantize : {
            num_of_bins : 4096
            asymmetric : false
        }
    ...
    }
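
Because a malformed quant.cfg (for example, an unclosed common_config block) is only caught when the tool parses it, a quick local sanity check can help. The helper below is a generic sketch and not part of the AMCT toolchain; it only verifies that the braces of a protobuf text-format file are balanced, ignoring `#` comments:

```python
# Minimal brace-balance check for a protobuf text-format configuration
# file such as the quant.cfg example above. Illustrative helper only.

def braces_balanced(text: str) -> bool:
    """Return True if every '{' in the config has a matching '}'."""
    depth = 0
    for line in text.splitlines():
        # Strip '#' comments before counting braces.
        line = line.split("#", 1)[0]
        for ch in line:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth < 0:      # closing brace before any opener
                    return False
    return depth == 0

sample = """
batch_num : 2
common_config : {
    arq_quantize : {
        channel_wise : true    # braces in comments are ignored: }
    }
}
"""

print(braces_balanced(sample))     # True
```

Running the same check on a file missing its final `}` returns False, flagging the truncation before calibration is attempted.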