save_model
Description
Inserts operators such as AscendQuant and AscendDequant into the modified graph, and outputs both a fake-quantized model for accuracy simulation in the Caffe environment and a deployable model for inference on the Ascend AI Processor.
Restrictions
- This API can be called only after batch_num forward passes have been completed. Otherwise, the quantization factors may be incorrect, leading to an unsatisfactory quantization result.
- Due to data type conversion, the quantization factors (scale and offset) in the generated quantization configuration file may differ from those in the simplified configuration file. Accuracy is not affected.
- scale_offset_record_file must contain the quantization factors of all quantization layers; otherwise, an error is reported. That is, the modified_model_file and modified_weights_file produced by quantize_model must complete batch_num forward passes in the Caffe environment (a minimal sketch of such a forward-pass run follows this list).
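The forward-pass requirement can be met with ordinary pycaffe inference. The following is a minimal sketch, not part of the AMCT API: the input blob name 'data' and the load_calibration_batch helper are assumptions for illustration.

```python
import caffe

def run_caffe_model(model_file, weights_file, batch_num):
    """Run batch_num forward passes on the modified model so that AMCT
    can record the quantization factors (minimal sketch)."""
    net = caffe.Net(model_file, weights_file, caffe.TEST)
    for _ in range(batch_num):
        # 'data' is the assumed input blob name; load_calibration_batch is
        # a hypothetical helper returning one preprocessed batch.
        net.blobs['data'].data[...] = load_calibration_batch()
        net.forward()
```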
Prototype
save_model(graph, save_type, save_path)
Parameters

| Parameter | Input/Return | Description | Restrictions |
|---|---|---|---|
| graph | Input | Graph structure modified by calling quantize_model. | An AMCT-defined Graph. |
| save_type | Input | Type of the model to be saved. | A string. The example below uses "Both". |
| save_path | Input | Model save path. Must include the model name prefix, for example, ./quantized_model/model. | A string. |
Returns
None
Outputs
- A fake-quantized model and its weight file, for accuracy simulation in the Caffe environment; the file names contain the fake_quant keyword.
- A deployable model and its weight file; the file names contain the deploy keyword. The model can be deployed on the Ascend AI Processor after being converted by the ATC tool.
- A quantization information file that records the locations of the quantization layers inserted by AMCT and the operator fusion information, used for accuracy analysis of the quantized model.
When quantization is performed again, the files listed above are overwritten.
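For reference, the fake-quantized model can be evaluated with ordinary pycaffe inference. The sketch below is a hedged example: the file names follow the example in the next section, and the accuracy comparison itself is left to the caller.

```python
import caffe

# Load the fake-quantized model written by save_model (file names as in
# the example below) and run one forward pass for accuracy simulation.
net = caffe.Net('./quantized_model/model_fake_quant_model.prototxt',
                './quantized_model/model_fake_quant_weights.caffemodel',
                caffe.TEST)
outputs = net.forward()  # compare against the original model's outputs
```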
Examples
```python
from amct_caffe import save_model

# In the Caffe environment, perform batch_num forward passes on the
# modified model to calibrate the quantization factors.
run_caffe_model(modified_model_file, modified_weights_file, batch_num)

# Call this API to save the quantized model to a .prototxt model file and
# a .caffemodel weight file. The following files are generated in the
# ./quantized_model folder: model_fake_quant_model.prototxt,
# model_fake_quant_weights.caffemodel, model_deploy_model.prototxt,
# model_deploy_weights.caffemodel, and model_quant.json.
save_model(graph=graph, save_type="Both", save_path="./quantized_model/model")
```
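As a quick sanity check (not part of the AMCT API), one can confirm that the files listed in the comment above were written:

```python
import os

expected = ["model_fake_quant_model.prototxt",
            "model_fake_quant_weights.caffemodel",
            "model_deploy_model.prototxt",
            "model_deploy_weights.caffemodel",
            "model_quant.json"]
for name in expected:
    assert os.path.exists(os.path.join("./quantized_model", name)), name
```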