What Do I Do If Model Quantization Fails Due to Mismatch Between the Calibration Dataset Size and the Model Input Size?
Symptom
During model conversion using ATC, the --compression_optimize_conf parameter is used to configure model quantization options (quantizing the model weights from float32 to int8). The following error message is displayed:
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[utils_acl.cpp:133]input image size[1579014] is not equal to model input size[3158028]
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[sample_process.cpp:234]memcpy device buffer failed
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[sample_process.cpp:298]execute PreProcess failed
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[sample_process.cpp:275]ACL model infer failed.
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[quantize_api.cpp:240]sample process failed
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:17[quantize_api.cpp:376]Do Calibration failed.
[ERROR] AMCT(21177,atc.bin):2023-04-14-14:43:20[inner_graph_calibration.cpp:78]Failed to excute InnerQuantizeGraph failed.
...
ATC run failed, Please check the detail log, Try 'atc --help' for more information
Solutions
The error message input image size[xxxxxx] is not equal to model input size[xxxxxx] indicates that the byte size of the calibration data does not match the byte size of the model input during quantization. Such a mismatch is usually caused by an inconsistent shape or data type between the calibration data and the model input. In this log, for example, the model input size (3158028 bytes) is exactly twice the calibration data size (1579014 bytes), which would be consistent with float16 data being fed to a float32 input. Therefore, check both the shape and the data type of the calibration dataset.
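The comparison the tool performs can be sketched as follows: the expected input size in bytes is the product of the input dimensions multiplied by the element size of the data type. The shape and dtypes below are assumptions chosen for illustration (a (1, 3, 513, 513) float32 input happens to yield the 3158028 bytes seen in the log); the actual values must come from your own model.

```python
# Minimal sketch: compare a calibration sample's byte size with the
# byte size the model expects. Shape and dtypes are assumptions, not
# values read from any real ATC/AMCT configuration.
import numpy as np

def expected_input_bytes(shape, dtype):
    """Bytes the model expects for one input: product of dims x element size."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

# Assumed shape: (1, 3, 513, 513) in float32 gives 3158028 bytes,
# matching the "model input size" in the error log.
model_bytes = expected_input_bytes((1, 3, 513, 513), np.float32)

# A float16 sample of the same shape is only half that size -- the 2x gap
# mirrors the mismatch in the log (1579014 vs 3158028).
sample = np.zeros((1, 3, 513, 513), dtype=np.float16)
print(model_bytes, sample.nbytes)   # 3158028 1579014
```

If the two byte counts differ by a clean factor (2x or 4x), suspect the data type first; otherwise re-check the shape of each calibration sample.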
During quantization, the model input shape is configured using the input_shape parameter in the quantization configuration file. The model input data type can be determined from the source where the model was obtained, or by opening the model file with third-party software.