Overview

This section describes the options used in the CLI scenario.

  • If options listed by the amct_xxx calibration --help command are not described in Table 1, they are reserved or apply to other SoC versions. You do not need to pay attention to such options.
  • The amct_xxx calibration command can be organized in either of the following ways:
    • amct_xxx calibration param1=value1 param2=value2 ... (No space is allowed before the value. Otherwise, the argument is truncated and the value of param is empty.)
    • amct_xxx calibration param1 value1 param2 value2 ...

Replace xxx with the actual framework name, for example, Caffe, TensorFlow, or ONNX.
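As a hedged illustration of the two equivalent forms above, assuming the ONNX variant of the tool (amct_onnx) is installed and using hypothetical file names that are not taken from this manual:

```shell
# Form 1: key=value pairs. No space is allowed before the value
# (writing "--model= ./model/net.onnx" would leave the value empty).
amct_onnx calibration --model=./model/net.onnx --save_path=./results/net

# Form 2: space-separated option/value pairs, equivalent to Form 1.
amct_onnx calibration --model ./model/net.onnx --save_path ./results/net
```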

Table 1 Command-line options

Each entry below lists the option, its description, the frameworks that support it, and the application scenarios in which it is used.

--help or --h

Description: Displays help information.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: optional

--model

Description: Sets the path (including the file name) of the model file to be quantized.
  • Caffe: .prototxt file
  • TensorFlow: .pb file
  • ONNX: .onnx file

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: mandatory
  • QAT model adaptation to CANN model: mandatory

--weights

Description: Sets the path (including the file name) of the weight file (.caffemodel) to be quantized. This option applies only to Caffe models.

Supported frameworks: Caffe

Application scenarios:
  • Post-training quantization: mandatory

--outputs

Description: Sets the name of the output tensor of the source model.

Supported frameworks: TensorFlow

Application scenarios:
  • Post-training quantization: mandatory
  • QAT model adaptation to CANN model: mandatory

--save_path

Description: Sets the model save path.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: mandatory
  • QAT model adaptation to CANN model: mandatory

--input_shape

Description: Sets the shape of the model input. The shape must be the same as that of the input data after dataset processing. Required if --evaluator is not specified.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--batch_num

Description: Sets the number of batches used in the inference stage of post-training quantization (PTQ).

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--calibration_config

Description: Sets the path (including the file name) of the simplified configuration file for PTQ.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--data_dir

Description: Sets the path of the .bin dataset that matches the model. Required if --evaluator is not specified.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--data_types

Description: Sets the data types of the model inputs. Required if --evaluator is not specified.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--evaluator

Description: Sets the path of the Python script that implements the evaluator based on the Evaluator base class. This option is mutually exclusive with --input_shape, --data_dir, and --data_types.

Supported frameworks: Caffe, TensorFlow, ONNX

Application scenarios:
  • Post-training quantization: optional
  • QAT model adaptation to CANN model: not applicable

--gpu_id

Description: Specifies the GPU used for quantization.

Supported frameworks: Caffe

Application scenarios:
  • Post-training quantization: optional
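
The options above can be combined into a single PTQ invocation. The following is a hedged sketch for a Caffe model; all file names, the input tensor name, and the shape are hypothetical placeholders rather than values prescribed by this manual:

```shell
# Post-training quantization of a hypothetical Caffe model.
# --model / --weights: network definition (.prototxt) and weights (.caffemodel)
# --save_path: where the quantized model is saved
# --input_shape / --data_dir / --data_types: describe the calibration input;
#   used here instead of --evaluator (the two approaches are mutually exclusive)
# --batch_num: number of batches run during the PTQ inference stage
amct_caffe calibration \
    --model ./model/resnet50.prototxt \
    --weights ./model/resnet50.caffemodel \
    --save_path ./results/resnet50 \
    --input_shape "data:1,3,224,224" \
    --data_dir ./dataset \
    --data_types "float32" \
    --batch_num 2
```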