Manual Tuning
If the PTQ accuracy does not meet the requirements, you can manually adjust the parameters in the config.json file. This section provides the adjustment principles and parameter descriptions.
Tuning Workflow
If you find that the accuracy of the model quantized based on the initial config.json file generated by the create_quant_config API call is not as expected, you can tune the configuration parameters until the accuracy meets your requirement. The workflow for manually tuning the parameters in the PTQ configuration file config.json goes through the following three phases:
- Tune the amount of data used for calibration.
- Skip quantizing certain layers.
- Tune the quantization algorithm and parameters.
Specifically,
- Run quantization based on the initial config.json file generated by the create_quant_config API call. If the accuracy of the quantized model is satisfactory, stop tuning the configuration parameters. Otherwise, go to 2.
- Tweak the value of batch_num to tune the amount of data used for calibration.
batch_num controls the number of data batches used for calibration. Tune it based on the batch size and the dataset size.
Generally, a larger batch_num means more data samples are used for calibration and a smaller accuracy drop in the quantized model. However, more data does not necessarily improve accuracy, and it certainly consumes more memory and slows down quantization, possibly exhausting memory, video RAM, or thread resources. A good tradeoff is usually achieved when the product of batch_num and batch_size (the number of images per batch) is 16 or 32.
- Run quantization based on the new configuration generated in 2. If the accuracy of the quantized model is satisfactory, stop tuning the configuration parameters. Otherwise, go to 4.
- Tweak the value of quant_enable to skip quantizing certain layers.
quant_enable is the quantization switch for a specified layer. The value false indicates that the layer is skipped during quantization; true indicates that it is quantized. Removing the layer's configuration entirely also skips the layer.
Quantizing a model can degrade its accuracy. Layers sensitive to quantization suffer large error increases once quantized and should therefore be left unquantized. Identify these layers as follows:
- The input layer, output layer, and layers with especially few parameters are likely to be quantization-sensitive.
- Use the Model Accuracy Analyzer to compare the output errors between the source model and the quantized model layer-wise (a cosine similarity of at least 0.99, for example) to locate the layers that reduce accuracy the most.
- Run quantization based on the new configuration generated in 4. If the accuracy of the quantized model is satisfactory, stop tuning the configuration parameters. Otherwise, go to 6.
- Tweak the values of activation_quant_params and weight_quant_params to tune the quantization algorithms and parameters.
For details about the algorithm parameters, see Command-Line Options. For details about the algorithm, see PTQ Algorithms.
- Run quantization based on the new configuration generated in 6. If the accuracy of the quantized model is satisfactory, stop tuning the configuration parameters. Otherwise, the model is not suitable for quantization, and you should remove the quantization configuration.
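The layer-sensitivity check described in step 4 can be sketched as follows. The per-layer outputs below are hypothetical placeholders; in practice they would be dumped from the source and quantized models on the same calibration data, for example with the Model Accuracy Analyzer:

```python
# Sketch: flag quantization-sensitive layers by comparing per-layer outputs
# of the source model and the quantized model via cosine similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def sensitive_layers(fp32_outputs, quant_outputs, threshold=0.99):
    """Return names of layers whose quantized output falls below the threshold."""
    return [
        name
        for name in fp32_outputs
        if cosine_similarity(fp32_outputs[name], quant_outputs[name]) < threshold
    ]

# Hypothetical dumped outputs for two layers.
fp32 = {"conv1": [1.0, 2.0, 3.0], "conv2": [1.0, 0.0, 0.0]}
quant = {"conv1": [1.01, 1.98, 3.02], "conv2": [0.0, 1.0, 0.0]}
print(sensitive_layers(fp32, quant))  # ['conv2']
```

Layers returned by such a check are candidates for quant_enable = false in config.json.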

Quantization Configuration File
The following is an example of the config.json file generated by the create_quant_config API call. Keep the layer names unique in the file.
- Uniform quantization configuration file (with the IFMR algorithm used for activation quantization)
```json
{
    "version": 1,
    "batch_num": 2,
    "activation_offset": true,
    "joint_quant": false,
    "do_fusion": true,
    "skip_fusion_layers": [],
    "conv1": {
        "quant_enable": true,
        "activation_quant_params": {
            "max_percentile": 0.999999,
            "min_percentile": 0.999999,
            "search_range": [0.7, 1.3],
            "search_step": 0.01,
            "act_algo": "ifmr",
            "asymmetric": false
        },
        "weight_quant_params": {
            "wts_algo": "arq_quantize",
            "channel_wise": true
        }
    },
    "conv2": {
        "quant_enable": true,
        "activation_quant_params": {
            "max_percentile": 0.999999,
            "min_percentile": 0.999999,
            "search_range": [0.7, 1.3],
            "search_step": 0.01,
            "act_algo": "ifmr",
            "asymmetric": false
        },
        "weight_quant_params": {
            "wts_algo": "arq_quantize",
            "channel_wise": false
        }
    }
}
```
- Uniform quantization configuration file (with the HFMG algorithm used for activation quantization)
```json
{
    "version": 1,
    "batch_num": 2,
    "activation_offset": true,
    "joint_quant": false,
    "do_fusion": true,
    "skip_fusion_layers": [],
    "conv1": {
        "quant_enable": true,
        "activation_quant_params": {
            "act_algo": "hfmg",
            "num_of_bins": 4096,
            "asymmetric": false
        },
        "weight_quant_params": {
            "wts_algo": "arq_quantize",
            "channel_wise": true
        }
    }
}
```
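Skipping a layer (step 4 of the tuning workflow) only requires flipping its quant_enable flag. A minimal sketch, assuming the config file follows the layout shown above; the global key list is taken from the examples:

```python
# Sketch: disable quantization for the layers identified as sensitive by
# setting quant_enable to false in config.json. Global keys such as
# "version" and "batch_num" are left untouched.
import json

GLOBAL_KEYS = {"version", "batch_num", "activation_offset",
               "joint_quant", "do_fusion", "skip_fusion_layers"}

def skip_layers(config, layer_names):
    for name in layer_names:
        if name in config and name not in GLOBAL_KEYS:
            config[name]["quant_enable"] = False
    return config

config = json.loads("""{
    "version": 1,
    "batch_num": 2,
    "conv1": {"quant_enable": true},
    "conv2": {"quant_enable": true}
}""")
skip_layers(config, ["conv2"])
print(config["conv2"]["quant_enable"])  # False
```

After editing, write the result back with json.dump and rerun quantization to check whether the accuracy recovers.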
Command-Line Options
The following tables describe the parameters in the configuration file.
version

| Field | Value |
|---|---|
| Description | Version number of the quantization configuration file |
| Type | int |
| Value | 1 |
| Notes | Currently, only version 1 is available. |
| Recommended Value | 1 |
| Required/Optional | Optional |
batch_num

| Field | Value |
|---|---|
| Description | Batch number for quantization |
| Type | int |
| Value | Greater than 0 |
| Notes | Defaults to 1. You are advised to keep the calibration dataset size within 50 images. Calculate batch_num from the calibration dataset size as follows: batch_num x batch_size = calibration dataset size, where batch_size is the number of images per batch. |
| Recommended Value | 1 |
| Required/Optional | Optional |
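The relationship batch_num x batch_size = calibration dataset size can be checked with a few lines; the sizes below are illustrative:

```python
# Sketch: pick batch_num so that batch_num * batch_size covers the
# calibration dataset, keeping the total within the advised 50 images.
def choose_batch_num(dataset_size, batch_size):
    # Ceiling division, so every calibration image is consumed.
    return -(-dataset_size // batch_size)

print(choose_batch_num(32, 16))  # 2 batches: 2 x 16 = 32 images
print(choose_batch_num(50, 16))  # 4 batches cover all 50 images
```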
activation_offset

| Field | Value |
|---|---|
| Description | Selects symmetric or asymmetric mode for activation quantization. This is a global configuration parameter. The layer-wise asymmetric parameter takes precedence over activation_offset if both exist in the configuration file. |
| Type | bool |
| Value | true or false |
| Recommended Value | true |
| Required/Optional | Optional |
joint_quant

| Field | Value |
|---|---|
| Description | Eltwise joint quantization switch |
| Type | bool |
| Value | true or false |
| Recommended Value | false |
| Required/Optional | Optional |
do_fusion

| Field | Value |
|---|---|
| Description | Fusion switch |
| Type | bool |
| Value | true or false |
| Notes | For the fusible layers and fusion patterns, see Fusion Support. |
| Recommended Value | true |
| Required/Optional | Optional |
skip_fusion_layers

| Field | Value |
|---|---|
| Description | Layers to skip during fusion |
| Type | string |
| Value | Must be names of fusible layers. For the fusible layers and fusion patterns, see Fusion Support. |
| Notes | Sets the layers to skip fusion. |
| Recommended Value | - |
| Required/Optional | Optional |
Layer name (for example, conv1)

| Field | Value |
|---|---|
| Description | Quantization configuration of a network layer |
| Type | object |
| Value | - |
| Notes | Includes the quant_enable, activation_quant_params, and weight_quant_params parameters. |
| Recommended Value | - |
| Required/Optional | Optional |
quant_enable

| Field | Value |
|---|---|
| Description | Quantization enable switch for the layer |
| Type | bool |
| Value | true or false |
| Recommended Value | true |
| Required/Optional | Optional |
activation_quant_params

| Field | Value |
|---|---|
| Description | Activation quantization parameters |
| Type | object |
| Value | - |
| Notes | Includes the parameters described below. Note that IFMR algorithm parameters are mutually exclusive with HFMG parameters at the same layer. |
| Recommended Value | - |
| Required/Optional | Optional |
weight_quant_params

| Field | Value |
|---|---|
| Description | Weight quantization parameters |
| Type | object |
| Value | - |
| Recommended Value | - |
| Required/Optional | Optional |
act_algo

| Field | Value |
|---|---|
| Description | Activation quantization algorithm |
| Type | string |
| Value | ifmr or hfmg |
| Notes | ifmr: IFMR algorithm for activation quantization. hfmg: HFMG algorithm for activation quantization. |
| Recommended Value | - |
| Required/Optional | Optional |
asymmetric

| Field | Value |
|---|---|
| Description | Selects symmetric or asymmetric quantization for activations at the layer level. The asymmetric parameter takes precedence over the global activation_offset parameter if both exist in the configuration file. |
| Type | bool |
| Value | true or false |
| Recommended Value | true |
| Required/Optional | Optional |
max_percentile

| Field | Value |
|---|---|
| Description | Upper bound for searching for the maximum value in the IFMR activation quantization algorithm |
| Type | float |
| Value | (0.5, 1] |
| Notes | For example, given 100 numeric values sorted in descending order, an upper bound of 1.0 means the value at index 0 (100 – 100 x 1.0) is taken as the maximum. A larger value moves the clipping upper bound closer to the true maximum of the data to be quantized. |
| Recommended Value | 0.999999 |
| Required/Optional | Optional |
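The index arithmetic in the example above can be reproduced directly; the data here is illustrative:

```python
# Sketch: with N values sorted in descending order, max_percentile selects
# the value at index N - round(N * max_percentile) as the clipping maximum.
def clipping_max(sorted_desc, max_percentile):
    n = len(sorted_desc)
    index = n - int(round(n * max_percentile))
    return sorted_desc[index]

data = sorted(range(100), reverse=True)  # 99, 98, ..., 0
print(clipping_max(data, 1.0))   # index 0 -> 99, the true maximum
print(clipping_max(data, 0.98))  # index 2 -> 97, slightly clipped
```

min_percentile works the same way on values sorted in ascending order, selecting the clipping minimum.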
min_percentile

| Field | Value |
|---|---|
| Description | Lower bound for searching for the minimum value in the IFMR activation quantization algorithm |
| Type | float |
| Value | (0.5, 1] |
| Notes | For example, given 100 numeric values sorted in ascending order, a lower bound of 1.0 means the value at index 0 (100 – 100 x 1.0) is taken as the minimum. A larger value moves the clipping lower bound closer to the true minimum of the data to be quantized. |
| Recommended Value | 0.999999 |
| Required/Optional | Optional |
search_range

| Field | Value |
|---|---|
| Description | Quantization factor search range ([search_range_start, search_range_end]) of the IFMR activation quantization algorithm |
| Type | A list of two floats |
| Value | 0 < search_range_start < search_range_end |
| Notes | Sets the quantization factor search range. |
| Recommended Value | [0.7, 1.3] |
| Required/Optional | Optional |
search_step

| Field | Value |
|---|---|
| Description | Quantization factor search step of the IFMR activation quantization algorithm |
| Type | float |
| Value | (0, search_range_end – search_range_start] |
| Notes | Sets the fluctuation step of the clipping upper bound. A smaller value means a finer search. The number of search iterations is (search_range_end – search_range_start)/search_step, so a very small step increases the search time and may make the process appear to hang. |
| Recommended Value | 0.01 |
| Required/Optional | Optional |
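The iteration count implied by the formula above can be checked quickly for the default settings:

```python
# Sketch: number of IFMR search iterations implied by search_range and
# search_step, as described above.
def search_iterations(search_range_start, search_range_end, search_step):
    return round((search_range_end - search_range_start) / search_step)

print(search_iterations(0.7, 1.3, 0.01))   # 60 iterations for the defaults
print(search_iterations(0.7, 1.3, 0.001))  # 600: a 10x slower search
```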
num_of_bins

| Field | Value |
|---|---|
| Description | Number of bins (the minimum unit of a histogram) in the HFMG activation quantization algorithm |
| Type | unsigned int |
| Value | 1024, 2048, 4096, or 8192 |
| Notes | A larger num_of_bins fits the activation distribution better and improves the quantization result, but lengthens the PTQ time. |
| Recommended Value | 4096 |
| Required/Optional | Optional when quantizing with the HFMG algorithm |
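The role of num_of_bins can be illustrated with a plain histogram. This sketch does not show the HFMG internals, only how the bin count trades distribution resolution for work; the activation values are hypothetical:

```python
# Sketch: a finer histogram (larger num_of_bins) resolves an activation
# distribution more precisely, at the cost of more bins to process.
def histogram(values, num_of_bins, lo, hi):
    counts = [0] * num_of_bins
    width = (hi - lo) / num_of_bins
    for v in values:
        idx = min(int((v - lo) / width), num_of_bins - 1)
        counts[idx] += 1
    return counts

acts = [0.1, 0.2, 0.2, 0.9]          # hypothetical activation values
print(histogram(acts, 4, 0.0, 1.0))  # coarse: [3, 0, 0, 1]
```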
wts_algo

| Field | Value |
|---|---|
| Description | Weight quantization algorithm |
| Type | string |
| Value | arq_quantize |
| Notes | arq_quantize: ARQ algorithm for weight quantization |
| Recommended Value | - |
| Required/Optional | Optional |
channel_wise

| Field | Value |
|---|---|
| Description | Whether the arq_quantize algorithm uses a separate quantization factor for each channel |
| Type | bool |
| Value | true or false |
| Recommended Value | true |
| Required/Optional | Optional |
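The difference channel_wise makes can be sketched with symmetric int8 scales. The scale formula here (maximum absolute value divided by 127) is a common symmetric-quantization convention and an assumption for illustration, not the documented ARQ internals:

```python
# Sketch: per-channel vs per-tensor weight scales. With channel_wise=true,
# each output channel gets its own scale, so a small-magnitude channel keeps
# more precision than it would under one shared scale.
def scales(weights, channel_wise):
    # weights: list of per-channel weight lists (hypothetical layout).
    if channel_wise:
        return [max(abs(w) for w in ch) / 127.0 for ch in weights]
    flat_max = max(abs(w) for ch in weights for w in ch)
    return [flat_max / 127.0] * len(weights)

w = [[0.02, -0.01], [1.2, -0.8]]      # one tiny channel, one large channel
print(scales(w, channel_wise=True))   # tiny channel gets its own tiny scale
print(scales(w, channel_wise=False))  # both channels share 1.2 / 127
```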