Command-Line Options

Before using the ATC tool to convert a model, check the tool restrictions and use the option overview provided in this section to quickly preview the related options.

General Restrictions

Before model conversion, pay attention to the following restrictions:

  • To convert a network model such as Faster R-CNN to an offline model supported by the Ascend AI Processor, you must first modify the model file (.prototxt). For details, see Custom Caffe Network Modification.
  • Model conversion is supported for the Caffe, TensorFlow, MindSpore, and ONNX frameworks.
    • For MindSpore, Caffe, and ONNX, the input data type can be FP32, FP16, or UINT8 (UINT8 input is implemented by configuring --insert_op_conf for data preprocessing).
    • For TensorFlow, the input data type can be FP16, FP32, UINT8, INT32, INT64, or BOOL.
  • For a Caffe model, the op name and op type in the model file (.prototxt) and weight file (.caffemodel) must be consistent (case-sensitive).
  • For a TensorFlow model, only the FrozenGraphDef format is supported (see the example command after this list).
  • For a Caffe model, the input data supports at most 4 dimensions, and operators such as reshape and expanddim do not support 5D output.
  • dim != 0 must hold for the inputs and outputs of all layers except const layers in a model.
  • Only the operators listed in CANN Operator Specifications are supported, and the operator restrictions must be met.
  • Due to a software restriction (input data cannot be of the DT_INT8 type in dynamic shape scenarios), dynamic shape options such as --dynamic_batch_size and --dynamic_image_size cannot be used when the ATC tool converts a quantized deployable model. Otherwise, the model conversion fails.
  • When the ATC tool converts a deployable model quantized by the AMCT tool, the high-precision features cannot be used. For example, force_fp32 and must_keep_origin_dtype (FP32 input of the original graph) cannot be set through --precision_mode, origin cannot be set through --precision_mode_v2, and high_precision cannot be set through --op_precision_mode. Setting quantization parameters together with a high-precision mode provides neither the performance benefits of quantization nor the benefits of the high-precision mode.
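
As a quick orientation before the detailed option table, the following is a minimal sketch of a conversion command for a TensorFlow frozen graph (framework type 3 denotes TensorFlow in the ATC framework numbering). The model file name and output name are placeholders, and <soc_version> stands for the target SoC version of your environment:

  atc --model=./resnet50_frozen.pb --framework=3 --output=./resnet50 --soc_version=<soc_version>

If the conversion succeeds, the offline model is generated as resnet50.om under the specified output path.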

Command-Line Options

  • If an option listed by the atc --help command is not described in Table 1, it is reserved or applies to other SoC versions. You do not need to pay attention to such options.
  • You can use either of the following atc command forms to convert a model (see the example after this list):
    • atc param1=value1 param2=value2 ... (No space is allowed before value; otherwise, the value is clipped and param receives an empty value.)
    • atc param1 value1 param2 value2 ...
  • Whether an option is required depends on the value of the --mode option (0 or 3).
  • ATC options can be prefixed with -- (for example, --help) or - (for example, -help). A - prefix is automatically converted to -- when the atc command runs. This document uses the -- prefix.
  • In an ATC option name, you can use either an underscore (_) or a hyphen (-) to connect words (for example, soc_version or soc-version). This document uses underscores.
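
For example, the following two commands are equivalent; the Caffe model file, weight file, and output name are placeholders, and framework type 0 denotes Caffe:

  atc --framework=0 --model=./alexnet.prototxt --weight=./alexnet.caffemodel --output=./alexnet --soc_version=<soc_version>
  atc -framework 0 -model ./alexnet.prototxt -weight ./alexnet.caffemodel -output ./alexnet -soc_version <soc_version>
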
Table 1 ATC command-line options (each entry lists the option, its description, whether it is required, and its default value)

  • --help or --h: Displays help information. Required: No. Default: N/A.
  • --mode: Sets the work mode. Required: No. Default: 0.
  • --model: Sets the model file path, including the file name. Required: Yes. Default: N/A.
  • --weight: Sets the weight file path, including the file name. Required: No. Default: N/A.
  • --om: Sets the path (including the file name) of the offline model or original model file to be converted to JSON format. Required: No. Default: N/A.
  • --framework: Sets the framework of the original model. Required: Yes. Default: N/A.
  • --input_format: Sets the input format. Required: No. Default: NCHW for MindSpore, Caffe, and ONNX; NHWC for TensorFlow.
  • --input_shape: Sets the input shape. Required: No. Default: N/A.
  • --input_shape_range: Sets the input shape range. This option is deprecated; avoid using it. Required: No. Default: N/A.
  • --dynamic_batch_size: Sets dynamic batch size profiles. Applies to scenarios where the number of images per inference batch is not fixed. Required: No. Default: N/A.
  • --dynamic_image_size: Sets dynamic image size profiles. Applies to scenarios where the resolution of the input images is not fixed. Required: No. Default: N/A.
  • --dynamic_dims: Sets dynamic dimension profiles in ND format. Applies to scenarios where the inference dimensions are not fixed. Required: No. Default: N/A.
  • --singleop: Sets the single-operator definition file (JSON), which is converted to an offline model supported by the Ascend AI Processor. Required: No. Default: N/A.
  • --distributed_cluster_build: Enables model build for distributed deployment. When this option is enabled, the generated offline model is used for distributed deployment. Required: No. Default: 0.
  • --cluster_config: Specifies the logical topology configuration file of the target operating environment. Required: No. Default: N/A.
  • --enable_graph_parallel: Automatically partitions the original model. Required: No. Default: 0.
  • --graph_parallel_option_path: Sets the path of the partitioning policy configuration file used when the original model is partitioned. Required: No. Default: N/A.
  • --shard_model_dir: Sets the path where each slice model file is stored. Required: No. Default: N/A.
  • --model_relation_config: Sets the configuration file that describes the data associations and distributed communication group relationships between multiple slice models. Required: No. Default: N/A.
  • --output: Sets the path (including the file name) of the offline model converted from an open-source framework network, or the directory of the single-operator models converted from a single-operator description file. Required: Yes. Default: N/A.
  • --output_type: Sets the output data type of a network or an output node. Required: No. Default: N/A.
  • --check_report: Sets the precheck result file path, including the file name. Required: No. Default: check_result.json generated in the path where the atc command is executed.
  • --json: Sets the path (including the file name) of the JSON file converted from the offline model or original model file. Required: No. Default: N/A.
  • --host_env_os: If the OS and architecture of the model build environment differ from those of the model operating environment, set this option to the OS type of the operating environment. Required: No. Default: run the atc --help command to view the default value, or view it in the ${INSTALL_DIR}/opp/scene.info file.
  • --host_env_cpu: If the OS and architecture of the model build environment differ from those of the model operating environment, set this option to the OS architecture of the operating environment. Required: No. Default: run the atc --help command to view the default value, or view it in the ${INSTALL_DIR}/opp/scene.info file.
  • --soc_version: Sets the target SoC version. Required: Yes. Default: N/A.
  • --aicore_num: Sets the number of AI Cores used for model build. Required: No. Default: the maximum number available.
  • --virtual_type: Allows an offline model to run on a virtual device generated by computing power grouping. Required: No. Default: 0.
  • --out_nodes: Sets the output nodes. Required: No. Default: N/A.
  • --input_fp16_nodes: Sets the names of the FP16 input nodes. Required: No. Default: N/A.
  • --insert_op_conf: Sets the configuration file path (including the file name) of an operator to be inserted, for example, the AIPP operator for data preprocessing. Required: No. Default: N/A.
  • --external_weight: Saves the weights of the Const/Constant nodes of the network in a separate file and converts the weights to FileConstant when the OM model file is generated. Required: No. Default: 0.
  • --op_name_map: Sets the path (including the file name) of the mapping configuration file for custom operators. Because the function of a custom operator can vary from network to network, you can specify the mapping between the custom operator and the actual operator running on the network. Required: No. Default: N/A.
  • --is_input_adjust_hw_layout: Sets the data type and format of the network inputs to FP16 and NC1HWC0, respectively. Required: No. Default: false.
  • --is_output_adjust_hw_layout: Sets the data type and format of the network outputs to FP16 and NC1HWC0, respectively. Required: No. Default: false.
  • --disable_reuse_memory: Controls whether memory reuse is disabled. The default value 0 keeps memory reuse enabled. Required: No. Default: 0.
  • --fusion_switch_file: Sets the fusion switch configuration file path, including the file name. Required: No. Default: N/A.
  • --enable_scope_fusion_passes: Enables specific fusion patterns at build time. Required: No. Default: N/A.
  • --enable_single_stream: Limits a model to using only one stream. Required: No. Default: false.
  • --enable_small_channel: Enables small channel optimization. When enabled, performance benefits are obtained at the first convolutional layer with channel <= 4. Required: No. Default: 0.
  • --ac_parallel_enable: Allows AI CPU operators and AI Core operators to run in parallel in a dynamic shape graph. Required: No. Default: 0.
  • --quant_dumpable: Collects the dump data of quantization operators. Required: No. Default: 0.
  • --compression_optimize_conf: Sets the path (including the file name) of the compression optimization configuration file. Required: No. Default: N/A.
  • --buffer_optimize: Enables buffer optimization. Required: No. Default: l2_optimize.
  • --mdl_bank_path: Sets the directory of the custom knowledge repository generated after Auto Tune tuning. Required: No. Default: ${HOME}/Ascend/latest/data/aoe/custom/graph/<soc_version>.
  • --oo_level: Specifies the optimization level for graph build. Required: No. Default: O3.
  • --oo_constant_folding: Enables constant folding optimization. Required: No. Default: true.
  • --oo_dead_code_elimination: Enables dead code elimination optimization. Required: No. Default: true.
  • --topo_sorting_mode: Specifies the graph traversal mode used when operators are built in graph mode. Required: No. Default: 1.
  • --precision_mode: Sets the precision mode of the model. Required: No. Default: force_fp16.
  • --precision_mode_v2: Sets the precision mode of the model. Required: No. Default: fp16.
  • --op_precision_mode: Sets the precision mode at the operator level. You can use this option to set different precision modes for different operators. Required: No. Default: N/A.
  • --modify_mixlist: Sets the operators on the mixed precision list. Required: No. Default: N/A.
  • --op_select_implmode: Sets the operator implementation mode. Required: No. Default: high_performance.
  • --optypelist_for_implmode: Sets the implementation mode of the operators on the optype list, either high_precision or high_performance. Required: No. Default: N/A.
  • --keep_dtype: Keeps the computation precision of specific operators unchanged during the build of the original network model. Required: No. Default: N/A.
  • --customize_dtypes: Customizes operator computation precision during model build. Required: No. Default: N/A.
  • --is_weight_clip: Clips floating-point weight data. Required: No. Default: 1.
  • --op_bank_path: Sets the path of the custom knowledge repository generated after AOE tuning. Required: No. Default: $HOME/Ascend/latest/data/aoe/custom/op.
  • --dump_mode: Dumps a .json file with shape information. Required: No. Default: 0.
  • --log: Sets the level of the logs printed during ATC model conversion. Required: No. Default: null.
  • --display_model_info: Queries model information at build time or about an existing offline model, including the model's usage of key resources and information about the build and operating environments. Required: No. Default: 0.
  • --op_compiler_cache_mode: Sets the disk cache mode for operator build. Required: No. Default: disable.
  • --op_compiler_cache_dir: Sets the disk cache directory for operator build. Required: No. Default: $HOME/atc_data.
  • --op_debug_level: Enables or disables TBE operator debug at operator build time. Required: No. Default: 0.
  • --op_debug_config: Sets the path (including the file name) of the configuration file for enabling global memory (DDR) detection. Required: No. Default: N/A.
  • --debug_dir: Sets the path for storing the debugging files (.o, .json, and .cce) generated by operator build during model conversion and network migration. Required: No. Default: ./kernel_meta.
  • --atomic_clean_policy: Specifies how the memory occupied by operators with the memset attribute (memset operators) on the network is cleaned; the default policy cleans it collectively. Required: No. Default: 0.
  • --deterministic: Enables or disables deterministic computing. Required: No. Default: 0.
  • --status_check: Adds overflow/underflow detection logic during operator build. Required: No. Default: 0.
  • --export_compile_stat: Generates the operator fusion information result file fusion_result.json (covering graph fusion and UB fusion) during graph build. Required: No. Default: 1.
  • --shape_generalized_build_mode: Sets the shape build mode during graph build. Do not use this option, because it will be deprecated in a later version. Required: No. Default: shape_precise.
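
To illustrate how several of the options above combine, the following sketch converts a hypothetical ONNX model (framework type 5) with dynamic batch profiles. The input node name data, the file names, and the batch profile values are assumptions for illustration; the -1 in --input_shape marks the batch dimension that --dynamic_batch_size covers:

  atc --model=./net.onnx --framework=5 --output=./net_dynamic --soc_version=<soc_version> \
      --input_shape="data:-1,3,224,224" --dynamic_batch_size="1,2,4,8" --log=info

As noted in General Restrictions, do not combine --dynamic_batch_size or --dynamic_image_size with a quantized deployable model.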