Overview

This section describes the command-line options used by the AOE. An option and its value can be separated by an equal sign (=) or a space; the examples in this section use the equal sign (=).
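For example, the following two invocations are equivalent. The model path and option values are placeholders, not recommendations:

```shell
# Equal-sign form (model path and option values are placeholders)
aoe --job_type=2 --model=./model.pb

# Space-separated form, equivalent to the above
aoe --job_type 2 --model ./model.pb
```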

Options reported by the aoe --help command but not described in Table 1 are reserved or apply only to other SoC versions. You can ignore such options.

Table 1 AOE command-line options

| Option | Description | Required | Default Value |
|--------|-------------|----------|---------------|
| --help or -h | Displays help information. | No | N/A |
| --model or -m | Sets the path of the model file, including the file name. | No | N/A |
| --weight or -w | Sets the path of the weight file, including the file name. | No | N/A |
| --job_type or -j | Sets the tuning mode. | Yes | N/A |
| --framework or -f | Sets the framework of the original model. | No | N/A |
| --input_format | Sets the input data format. | No | NCHW (Caffe and ONNX); NHWC (TensorFlow) |
| --input_shape | Sets the shape of each input. | No | N/A |
| --dynamic_batch_size | Sets dynamic batch size profiles. Applies to scenarios where the number of images per inference batch is not fixed. | No | N/A |
| --dynamic_image_size | Sets dynamic image size profiles. Applies to scenarios where the resolution of input images is not fixed. | No | N/A |
| --dynamic_dims | Sets dynamic dimension profiles in ND format. Applies to scenarios where the inference dimensions are not fixed. | No | N/A |
| --ip | Sets the IP address of the NCS server. | Yes | N/A |
| --port | Sets the port number of the NCS server. | No | 8000 |
| --reload | Resumes tuning after subgraph tuning is interrupted. If you want to continue tuning from the previous phase after the current process is interrupted, include --reload to enter reload mode. | No | N/A |
| --device | Specifies the device used for tuning in the operating environment. | No | N/A |
| --progress_bar | Enables or disables the display of the tuning progress. | No | on |
| --singleop | Tunes one or more specified operators based on an operator description file. | No | N/A |
| --output | Sets the path of the tuned model, including the file name. | No | N/A |
| --output_type | Sets the output data type of a network or an output node. | No | N/A |
| --host_env_os | If the OS and architecture of the model compilation environment differ from those of the model operating environment, set this option to the OS type of the operating environment. | No | N/A |
| --host_env_cpu | If the OS and architecture of the model compilation environment differ from those of the model operating environment, set this option to the OS architecture of the operating environment. | No | N/A |
| --aicore_num | Sets the number of AI Cores used for model build. | No | Maximum |
| --out_nodes | Sets the output nodes. | No | N/A |
| --input_fp16_nodes | Specifies the input nodes whose data type is float16. | No | N/A |
| --insert_op_conf | Sets the path of the operator insertion configuration file, including the file name. | No | N/A |
| --op_name_map | Sets the path of the custom (non-standard) operator mapping configuration file, including the file name. | No | N/A |
| --is_input_adjust_hw_layout | Sets the data type and format of the network inputs to float16 and NC1HWC0, respectively. | No | false |
| --is_output_adjust_hw_layout | Sets the data type and format of the network outputs to float16 and NC1HWC0, respectively. | No | false |
| --disable_reuse_memory | Memory reuse switch. 0: enables memory reuse; 1: disables memory reuse. | No | 0 |
| --fusion_switch_file | Sets the path of the fusion switch configuration file, including the file name. | No | N/A |
| --enable_scope_fusion_passes | Enables specific fusion patterns at build time. | No | N/A |
| --enable_small_channel | Enables small-channel tuning, which can improve performance at convolutional layers with a channel size ≤ 4. | No | 0 |
| --compression_optimize_conf | Sets the path of the model compression configuration file, including the file name. This option enables the model compression functions specified in the configuration file to improve network performance. | No | N/A |
| --buffer_optimize | Enables buffer tuning. | No | l2_optimize |
| --precision_mode | Selects the operator precision mode. | No | force_fp16 |
| --op_select_implmode | Selects the operator implementation mode. | No | high_performance |
| --optypelist_for_implmode | Lists the operator types to which the mode set by --op_select_implmode applies. | No | N/A |
| --op_debug_level | Enables debugging of TBE operators during operator build. | No | 0 |
| --log | Sets the log level during tuning. | No | N/A |
| --tune_ops_file | Specifies operator names or types in a configuration file to tune only the specified operators. | No | N/A |
| --op_precision_mode | Sets the precision mode of individual operators. You can use this option to set different precision modes for different operators. | No | N/A |
| --modify_mixlist | Sets the operators on the mixed-precision list. | No | N/A |
| --keep_dtype | Keeps the computation precision of selected operators unchanged when building the original network model. | No | N/A |
| --customize_dtypes | Customizes the computation precision of one or more operators during model building. | No | N/A |
| --tune_optimization_level | Sets the tuning mode: high-performance mode or normal mode. | No | O2 |
| --Fdeeper_opat | Enables in-depth operator tuning. | No | N/A |
| --Fnonhomo_split | Enables non-uniform subgraph partition tuning. | No | N/A |
| --Fop_format | Enables operator format tuning. | No | N/A |
| --init_bypass | Transparently passes compilation options that the AOE tuning framework and tuning services do not recognize in the modeling initialization phase. For the specific options that can be passed, see Command-Line Options. | No | N/A |
| --build_bypass | Transparently passes compilation options that the AOE tuning framework and tuning services do not recognize in the model compilation phase. For the specific options that can be passed, see aclgrphBuildModel. | No | N/A |
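As an illustration of how several of these options combine, the following sketch tunes a TensorFlow model whose batch size is not fixed at inference time. All paths and values are placeholders: the framework ID, job type value, node name, and shape shown here are assumptions and must be replaced with the values that apply to your model and environment.

```shell
# Illustrative only: tune a TensorFlow model with dynamic batch sizes.
# Every path and value below is a placeholder; adjust to your environment.
aoe --job_type=1 \
    --framework=3 \
    --model=./model.pb \
    --input_shape="input:-1,224,224,3" \
    --dynamic_batch_size="1,2,4,8" \
    --output=./model_tuned \
    --log=error
```

Note that --input_shape marks the variable dimension with -1, and --dynamic_batch_size then enumerates the batch sizes that the variable dimension may take.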