Command-Line Options
Before using the ATC tool to convert a model, check the tool restrictions and use the option overview provided in this section to quickly look up the related options.
General Restrictions
Before model conversion, pay attention to the following restrictions:
- To convert a network model such as Faster R-CNN to an offline model supported by the Ascend AI Processor, you must modify the model file (.prototxt). For details, see Custom Caffe Network Modification.
- Model conversion is supported for Caffe, TensorFlow, MindSpore, and ONNX frameworks.
- For MindSpore, Caffe, and ONNX, the input data type can be FP32, FP16, or UINT8 (UINT8 input is implemented by configuring --insert_op_conf for data preprocessing).
- If the original framework type is TensorFlow, the input data type can be FP16, FP32, UINT8, INT32, INT64, or BOOL.
- For a Caffe model, op name and op type in the model file (.prototxt) and weight file (.caffemodel) must be consistent (case-sensitive).
- For a TensorFlow model, only the FrozenGraphDef format is supported.
- For a Caffe model, the input data contains up to 4 dimensions, and operators such as reshape and expanddim do not support 5D output.
- dim != 0 must be true for the inputs and outputs of all layers except const layers in a model.
- Only the operators in CANN Operator Specifications are supported, and the operator restrictions must be met.
- Due to a software restriction (the input data cannot be of the DT_INT8 type in the dynamic shape scenario), dynamic shape options such as --dynamic_batch_size and --dynamic_image_size cannot be used when the ATC tool converts a quantized deployable model. Otherwise, the model conversion fails.
- When the ATC tool is used to convert a deployable model quantized by the AMCT tool, the high-precision feature cannot be used. For example, force_fp32 or must_keep_origin_dtype (FP32 input of the original graph) cannot be configured through --precision_mode, origin cannot be configured through --precision_mode_v2, and high_precision cannot be configured through --op_precision_mode. Setting quantization parameters in high-precision mode provides neither the performance benefits of quantization nor those of the high-precision mode.
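As a sketch of the quantized-model restriction above (the model file, input name, output name, and SoC version are placeholders, and --model, --framework, --output, --input_shape, and --soc_version are standard ATC options assumed here), the first command fails because a dynamic shape option is combined with a quantized deployable model, while the second converts the same model with a fixed shape:

```shell
# Fails: dynamic shape options cannot be used with a quantized deployable model.
atc --model=yolov5_quant.onnx --framework=5 --output=yolov5_quant \
    --soc_version=Ascend310 --input_shape="images:-1,3,640,640" \
    --dynamic_batch_size="1,2,4"

# Works: convert the quantized model with a fixed input shape instead.
atc --model=yolov5_quant.onnx --framework=5 --output=yolov5_quant \
    --soc_version=Ascend310 --input_shape="images:4,3,640,640"
```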
Command-Line Options
- If options queried with the atc --help command are not described in Table 1, they are reserved or applicable to other SoC versions. You do not need to pay attention to such options.
- You can use either of the following atc commands to convert a model as required:
- atc param1=value1 param2=value2 ... (No space is allowed before the value; otherwise, the value is clipped and the value of param becomes empty.)
- atc param1 value1 param2 value2 ...
- Whether an option is required depends on the --mode argument (0 or 3).
- ATC options can be prefixed with -- (for example, --help) or - (for example, -help). When - is used as the prefix, it will be automatically converted to -- during running of atc commands. This document uses the prefix --.
- In an ATC option, you can use an underscore (_) or hyphen (-) to connect two strings (for example, soc_version or soc-version). This document uses underscores.
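The two syntaxes above can be sketched as follows. The model path, output name, and SoC version are placeholders, and --model, --framework, --output, and --soc_version are standard ATC options assumed here; the two invocations are equivalent:

```shell
# "--option=value" form: no space is allowed before the value.
atc --model=resnet50.onnx --framework=5 --output=resnet50 --soc_version=Ascend310

# "--option value" form: the value follows the option as a separate argument.
atc --model resnet50.onnx --framework 5 --output resnet50 --soc_version Ascend310
```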
| Option | Description | Required | Default |
|---|---|---|---|
| --help | Displays help information. | No | N/A |
| --mode | Sets the work mode. | No | 0 |
|  | Sets the model file directory, including the file name. | Yes | N/A |
|  | Sets the weight file directory, including the file name. | No | N/A |
|  | Sets the directory (including the file name) of the offline model or original model file to be converted to JSON format. | No | N/A |
|  | Sets the framework of the original model. | Yes | N/A |
|  | Sets the input format. | No | For MindSpore, Caffe, and ONNX: NCHW; for TensorFlow: NHWC |
|  | Sets the input shape. | No | N/A |
|  | Sets the input shape range. This option is deprecated; avoid using it. | No | N/A |
| --dynamic_batch_size | Sets dynamic batch size profiles. Applies to the scenario where the number of images per inference batch is not fixed. | No | N/A |
| --dynamic_image_size | Sets dynamic image size profiles. Applies to the scenario where the resolution of the input images is not fixed. | No | N/A |
|  | Sets dynamic dimension profiles in ND format. Applies to the scenario where the dimensions for inference are not fixed. | No | N/A |
|  | Sets the single-operator definition file (JSON), which will be converted to an offline model supported by the Ascend AI Processor. | No | N/A |
|  | Enables model build for distributed deployment. After this option is enabled, the generated offline model is used for distributed deployment. | No | 0 |
|  | Specifies the logical topology configuration file of the target operating environment. | No | N/A |
|  | Automatically partitions the original model. | No | 0 |
|  | Sets the path of the partitioning policy configuration file used when partitioning the original model. | No | N/A |
|  | Sets the path where each slice model file is stored. | No | N/A |
|  | Sets the configuration file that describes the data associations and distributed communication group relationships between slice models. | No | N/A |
|  |  | Yes | N/A |
|  | Sets the output data type of a network or an output node. | No | N/A |
|  | Sets the precheck result file directory, including the file name. | No | check_result.json in the path where the atc command is executed |
|  | Sets the directory (including the file name) of the JSON file converted from the offline model or original model file. | No | N/A |
| --host_env_os | If the OS and architecture of the model build environment are different from those of the model operating environment, set this option to the OS type of the model operating environment. | No | Run the atc --help command to view the default value, or view the ${INSTALL_DIR}/opp/scene.info file |
| --host_env_cpu | If the OS and architecture of the model build environment are different from those of the model operating environment, set this option to the OS architecture of the model operating environment. | No | Run the atc --help command to view the default value, or view the ${INSTALL_DIR}/opp/scene.info file |
| --soc_version | Sets the target SoC version. | Yes | N/A |
|  | Sets the number of AI Cores used for model build. | No | The maximum number of AI Cores |
|  | Allows an offline model to run on a virtual device generated by the computing power group. | No | 0 |
|  | Sets the output nodes. | No | N/A |
|  | Sets the name of the FP16 input node. | No | N/A |
| --insert_op_conf | Sets the configuration file directory (including the file name) of an operator to be inserted, for example, the AIPP operator for data preprocessing. | No | N/A |
|  | Saves the weights of the Const/Constant nodes on the network in a separate file and converts the weights to FileConstant when the OM model file is generated. | No | 0 |
|  | Sets the directory (including the file name) of the mapping configuration file of a custom operator. The function of a custom operator varies according to the network, so you can specify the mapping between the custom operator and the actual custom operator running on the network. | No | N/A |
|  | Sets the data type and format of the network inputs to FP16 and NC1HWC0, respectively. | No | false |
|  | Sets the data type and format of the network outputs to FP16 and NC1HWC0, respectively. | No | false |
|  | Enables memory reuse. | No | 0 |
|  | Sets the fusion switch configuration file directory, including the file name. | No | N/A |
|  | Enables specific fusion patterns at build time. | No | N/A |
|  | Limits a model to using only one stream. | No | false |
|  | Enables small-channel optimization. When enabled, performance improves at the first convolutional layer whose channel count is <= 4. | No | 0 |
|  | Allows AI CPU operators and AI Core operators to run in parallel in a dynamic shape graph. | No | 0 |
|  | Collects the dump data of the quantization operator. | No | 0 |
|  | Sets the path (including the file name) of the compression optimization configuration file. | No | N/A |
|  | Enables buffer optimization. | No | l2_optimize |
|  | Sets the directory of the custom repository generated after Auto Tune tuning. | No | ${HOME}/Ascend/latest/data/aoe/custom/graph/<soc_version> |
|  | Specifies the optimization level for graph build. | No | O3 |
|  | Enables constant folding optimization. | No | true |
|  | Enables dead-edge elimination optimization. | No | true |
|  | Specifies the traversal mode used when operators are compiled in graph mode. | No | 1 |
| --precision_mode | Sets the precision mode of a model. | No | force_fp16 |
| --precision_mode_v2 | Sets the precision mode of a model. | No | fp16 |
| --op_precision_mode | Sets the precision mode of an operator. You can use this option to set different precision modes for different operators. | No | N/A |
|  | Sets the operators on the mixed precision list. | No | N/A |
|  | Sets the operator implementation mode. | No | high_performance |
|  | Sets the implementation mode of the operators on the optype list, either high_precision or high_performance. | No | N/A |
|  | Keeps the computation precision of some operators unchanged during the build of the original network model. | No | N/A |
|  | Customizes operator computing precision during model build. | No | N/A |
|  | Crops the weight data of the floating-point type. | No | 1 |
|  | Sets the path of the custom repository generated after AOE tuning. | No | $HOME/Ascend/latest/data/aoe/custom/op |
|  | Dumps a .json file with shape information. | No | 0 |
|  | Sets the level of logs printed during ATC model conversion. | No | null |
|  | Queries model information at build time or information about an existing offline model, including the model's usage of key resources and information about the build and operating environments. | No | 0 |
|  | Sets the disk cache mode for operator build. | No | disable |
|  | Sets the disk cache directory for operator build. | No | $HOME/atc_data |
|  | Enables or disables TBE operator debug at operator build time. | No | 0 |
|  | Sets the path (including the file name) of the configuration file for enabling global memory (DDR) detection. | No | N/A |
|  | Sets the path for storing the debugging files (.o, .json, and .cce) generated by operator build during model conversion and network migration. | No | ./kernel_meta |
|  | Collectively cleans up the memory occupied by all operators with the memset attribute (memset operators) on the network. | No | 0 |
|  | Enables or disables deterministic computing. | No | 0 |
|  | Adds overflow/underflow detection logic during operator build. | No | 0 |
|  | Generates fusion_result.json, the result file of operator fusion information (including graph fusion and UB fusion), during graph build. | No | 1 |
|  | Sets the shape build mode during graph build. Do not use this option; it will be deprecated in later versions. | No | shape_precise |
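As a usage sketch combining the dynamic batch options above (the model file, input name, output name, and SoC version are placeholders, and --model, --framework, --output, --input_shape, and --soc_version are standard ATC options assumed here), a model can be built with several batch size profiles that are then selected at inference time:

```shell
# -1 marks the batch dimension as dynamic; --dynamic_batch_size lists the
# batch size profiles that the generated offline model will support.
atc --model=resnet50.onnx --framework=5 --output=resnet50_dynbatch \
    --soc_version=Ascend310 \
    --input_shape="data:-1,3,224,224" \
    --dynamic_batch_size="1,2,4,8"
```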