Sets the port number of the server where the NCS is located.
device
Sets the ID of the device used for tuning in the operating environment.
buffer_optimize
Enables buffer optimization.
compress_weight_conf
Sets the path (including the file name) of the configuration file that lists the nodes to be compressed.
precision_mode
Selects the operator precision mode.
disable_reuse_memory
Enables or disables memory reuse.
enable_single_stream
Limits a model to use only one stream.
aicore_num
Sets the number of AI Cores used for model compilation. This option is reserved in the current version; if you use it, retain the default value.
fusion_switch_file
Sets the path (including the file name) of the fusion switch configuration file for graph fusion and UB fusion patterns. You can disable selected fusion patterns in this file.
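A fusion switch file is typically a JSON document that turns individual fusion passes on or off. The sketch below assumes the common `Switch`/`GraphFusion`/`UBFusion` layout; the pass names shown are placeholders, not real pass names:

```json
{
    "Switch": {
        "GraphFusion": {
            "SamplePassNameA": "off"
        },
        "UBFusion": {
            "SamplePassNameB": "off"
        }
    }
}
```

Passes omitted from the file are generally left in their default (enabled) state.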
enable_small_channel
Enables small channel tuning to yield performance benefits at convolutional layers with channel size ≤ 4.
You are advised to enable this function in inference scenarios.
op_select_implmode
Selects the operator implementation mode.
In high-precision mode, Newton's method or the Taylor expansion is used to improve operator precision (fp16). In high-performance mode, performance is maximized without reducing network precision (fp16).
optypelist_for_implmode
Sets the implementation mode for the operators on the operator type (optype) list. Currently, only the Pooling, SoftmaxV2, LRN, and ROIAlign operators are supported.
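The two options are typically combined so that only the listed operator types use the selected mode. In the hedged sketch below, the tool name `aoe` and the model path are illustrative assumptions:

```shell
# Illustrative sketch: run the Pooling and SoftmaxV2 operators in
# high-performance mode; the model path is a placeholder.
aoe --model=model.pb \
    --op_select_implmode=high_performance \
    --optypelist_for_implmode="Pooling,SoftmaxV2"
```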
enable_scope_fusion_passes
Sets the scope fusion pattern (or scope fusion patterns separated by commas) to take effect during compilation.
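Multiple patterns are passed as a single comma-separated value. In the hedged sketch below, the tool name `aoe`, the model path, and the pattern names are all illustrative placeholders:

```shell
# Illustrative sketch: enable two scope fusion patterns during compilation;
# the pattern names are placeholders, not real pattern names.
aoe --model=model.pb \
    --enable_scope_fusion_passes="ScopePattern1,ScopePattern2"
```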
op_debug_level
Enables debugging of TBE operators during operator compilation.
virtual_type
Indicates whether AOE can run on virtual devices generated on Ascend virtual instances.
sparsity
Enables global sparsity.
modify_mixlist
Sets the operators on the mixed precision list.
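The mixed precision list is usually supplied as a JSON file. The sketch below assumes the common black-list/white-list layout with `to-add`/`to-remove` arrays; the operator names are placeholders:

```json
{
    "black-list": {
        "to-remove": ["SampleOpA"],
        "to-add": []
    },
    "white-list": {
        "to-add": ["SampleOpB"]
    }
}
```

Moving an operator to the white list allows it to run in fp16 under mixed precision; moving it to the black list forces it to keep its original precision.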
customize_dtypes
Customizes the computing precision of one or more operators during model building.
framework
Sets the framework of the original network model.
job_type
Sets the tuning mode.
compression_optimize_conf
Sets the path (including the file name) of the model compression configuration file. Use this option to enable the model compression features specified in that file and improve network performance.
init_bypass
Passes through compilation options that the AOE tuning framework and tuning services cannot detect during the modeling initialization phase. For details about the specific options to be passed through, see Command-Line Options.
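Putting several of the options above together, a tuning invocation might look like the following sketch. The tool name `aoe`, the model path, and the numeric option values are illustrative assumptions rather than a verified command:

```shell
# Illustrative sketch: tune a model on device 0.
# The framework and job_type codes are assumed values; check the
# option reference for the codes that apply to your version.
aoe --framework=3 \
    --model=model.pb \
    --device=0 \
    --job_type=2
```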