create_quant_retrain_model

Function Usage

Description: Based on the configuration file config_file, inserts activation and weight fake-quantization layers into the graph to prepare it for quantization aware training (QAT), and saves the modified network definition and weights to new files.

Constraints

None

Prototype

create_quant_retrain_model(model_file, weights_file, config_file, modified_model_file, modified_weights_file, scale_offset_record_file)

Parameters

Option | Input/Return | Meaning | Restriction
model_file | Input | Definition file of the Caffe model (.prototxt). | A string
weights_file | Input | Weight file of the Caffe model (.caffemodel). | A string
config_file | Input | Quantization configuration file. | A string
modified_model_file | Input | File name of the Caffe model definition file (.prototxt) with the QAT layers inserted. | A string
modified_weights_file | Input | File name of the Caffe model weight file (.caffemodel) with the QAT layers inserted. | A string
scale_offset_record_file | Input | Quantization factor record file. | A string

Returns

None

Outputs

  • modified_model_file: definition file of the modified model, with the quantization aware training layers inserted into the original model.
  • modified_weights_file: weight file of the modified model, with the quantization aware training layers inserted into the original model.

Examples

from amct_caffe import amct

model_file = 'resnet50_train.prototxt'
weights_file = 'ResNet-50-model.caffemodel'
modified_model_file = './tmp/modified_model.prototxt'
modified_weights_file = './tmp/modified_model.caffemodel'
config_json_file = './config.json'
scale_offset_record_file = './record.txt'

amct.create_quant_retrain_model(model_file,
                                weights_file,
                                config_json_file,
                                modified_model_file,
                                modified_weights_file,
                                scale_offset_record_file)
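The output paths above point into a ./tmp directory. As a precaution, the sketch below creates the output directories before the call; whether create_quant_retrain_model creates missing directories itself is not stated here, so this helper (prepare_output_paths, not part of AMCT) is a hypothetical convenience:

```python
import os

def prepare_output_paths(*paths):
    """Ensure the parent directory of each output file exists.

    Hypothetical helper, not part of the AMCT API: call it with the
    modified model, modified weights, and record file paths before
    invoking create_quant_retrain_model.
    """
    for path in paths:
        parent = os.path.dirname(path)
        if parent:
            # exist_ok avoids an error when the directory is already there
            os.makedirs(parent, exist_ok=True)
    return [os.path.abspath(path) for path in paths]

resolved = prepare_output_paths('./tmp/modified_model.prototxt',
                                './tmp/modified_model.caffemodel',
                                './record.txt')
```

After this, the paths in `resolved` are absolute and their directories exist, so the subsequent amct.create_quant_retrain_model call cannot fail merely because ./tmp is missing.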