save_model
Description
Inserts operators such as AscendQuant and AscendDequant into the modified model based on the quantization factor record file (record_file), and saves two models: a fake_quant model that can be used for accuracy simulation in the ONNX Runtime environment, and a deployment model that can be used for inference.
Constraints
- This API can be called only after batch_num forward passes are completed. Otherwise, the quantization factors may be incorrect and the quantization results unsatisfactory.
- This API receives only the ONNX model file returned by quantize_model.
- This API requires a quantization factor record file as input. The file is generated in the quantize_model phase, and its factor values are filled in during model inference.
Prototype
save_model(modfied_onnx_file, record_file, save_path)
Parameters

| Parameter | Input/Return | Description | Restriction |
|---|---|---|---|
| modfied_onnx_file | Input | File name of the modified ONNX model returned by quantize_model. | A string |
| record_file | Input | Path of the quantization factor record file, including the file name. | A string |
| save_path | Input | Model save path. Must include the prefix of the model name, for example, ./quantized_model/model. | A string |
Returns
None
Outputs
- A fake-quantized ONNX model for accuracy simulation on ONNX Runtime; its file name contains the fake_quant keyword.
- A deployable ONNX model whose file name contains the deploy keyword. The model can be deployed on the Ascend AI Processor after being converted by the ATC tool.
- (Optional) *.external files, including *deploy.external and *fakequant.external:
These files are generated only when the size of the saved accuracy simulation model or deployment model exceeds 2 GB, and they are created in the same directory as the compressed *.onnx model file. They store tensor data: each tensor is stored in a separate *.external file named after the tensor, for example, conv1.weight_deploy.external and conv1.weight_fakequant.external.
When the ATC tool loads a compressed *.onnx deployment model file for model conversion, the tensor data in the *.external files in the same directory is read automatically.
When quantization is performed again, the preceding files output by the API will be overwritten.
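Conceptually, the fake_quant model emulates INT8 quantization in floating-point arithmetic: each quantized tensor is rounded and clipped to the INT8 range, then immediately dequantized. The following is a minimal NumPy sketch of that quantize-dequantize round trip; the `fake_quant` helper, scale, and offset values are illustrative only and are not read from any record file.

```python
import numpy as np

def fake_quant(x, scale, offset):
    # Quantize to the INT8 range (as an AscendQuant-like op would) ...
    q = np.clip(np.round(x / scale) + offset, -128, 127)
    # ... then dequantize back to float (AscendDequant-like), so the
    # result carries the rounding/clipping error of real INT8 inference.
    return (q - offset) * scale

x = np.array([0.1, -0.5, 5.0])
y = fake_quant(x, scale=0.01, offset=0)  # 5.0 saturates at 127 * 0.01
```

Running the fake_quant model in ONNX Runtime therefore reproduces the accuracy loss of the deployed INT8 model without Ascend hardware.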
Examples
```python
import amct_pytorch as amct

# Perform network inference and complete quantization during the inference.
for i in range(batch_num):
    output = calibration_model(input_batch)

# Insert the API call and save the quantized model as an ONNX file.
amct.save_model(modfied_onnx_file="./tmp/modfied_model.onnx",
                record_file="./tmp/scale_offset_record.txt",
                save_path="./results/model")
```
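After save_model completes, the fake_quant output can be evaluated with ONNX Runtime. The sketch below is a minimal helper, assuming onnxruntime is installed; the model file name in the usage comment is illustrative (check the file actually written under save_path, whose name contains the fake_quant keyword).

```python
import numpy as np

def run_fake_quant_model(model_path, input_batch):
    """Run a saved fake_quant ONNX model for accuracy simulation.

    model_path and the input shape are illustrative assumptions, not
    values prescribed by the save_model API.
    """
    import onnxruntime as ort  # lazy import; requires onnxruntime

    sess = ort.InferenceSession(model_path)
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: input_batch.astype(np.float32)})

# Example (hypothetical file name produced by save_path="./results/model"):
# outputs = run_fake_quant_model("./results/model_fake_quant.onnx",
#                                np.random.randn(1, 3, 224, 224))
```

Comparing these outputs against the original float model on the same inputs gives a quick estimate of the quantization accuracy loss.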