create_quant_retrain_model
Function Usage
Description: Performs the graph-modification step of quantization-aware training (QAT). Based on the config_file configuration file, this function inserts activation and weight fake-quantization layers into the graph and saves the modified network to a new file.
Constraints
None
Prototype
retrain_ops = create_quant_retrain_model(graph, config_file, record_file)
Parameters
| Parameter | Input/Return | Meaning | Restriction |
|---|---|---|---|
| graph | Input | The tf.Graph of the model to be quantized. | A tf.Graph object. |
| config_file | Input | Path of the user-defined QAT configuration file, which specifies the quantization configuration of each layer to be quantized. | A string. |
| record_file | Input | Path of the quantization factor record file. | A string. |
| retrain_ops | Return | Names of the new operator variables inserted for quantization-aware training. | A list of strings. |
Returns
A list of the names of the new variables inserted for quantization-aware training.
Outputs
None
Examples
    retrain_ops = amct.create_quant_retrain_model(graph, config_file, record_file)
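The call pattern and return shape can be sketched with a stand-in function. This is only an illustration: the real API modifies the graph and writes files, whereas the stub below simply returns example variable names to show that the result is a list of strings. The variable names used here are hypothetical, not names the real tool is guaranteed to produce.

```python
def create_quant_retrain_model(graph, config_file, record_file):
    """Stand-in for the real API (illustration only).

    The real function inserts activation and weight fake-quantization
    layers into `graph` per `config_file`, records quantization factors
    in `record_file`, and returns the names of the new variables added
    for quantization-aware training.
    """
    # Hypothetical example return value: a list of variable-name strings.
    return ["conv1/weight_fake_quant:0", "conv1/act_fake_quant:0"]


# Usage mirrors the prototype; `graph` would normally be a tf.Graph.
retrain_ops = create_quant_retrain_model(None, "./config.json", "./record.txt")
print(retrain_ops)
```

After the real call, `retrain_ops` can be used to locate the newly inserted quantization variables when setting up the retraining step.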
Parent topic: QAT APIs