Overview
This section describes the options used in the CLI scenario.
- Options listed by the amct_caffe calibration --help command but not described in Table 1 are reserved or apply to other SoC versions; you can ignore them.
- The amct_xxx calibration command can be written in either of the following ways:
  - amct_xxx calibration param1=value1 param2=value2 ... (No space is allowed before the value; otherwise, the value is truncated and the value of param is empty.)
  - amct_xxx calibration param1 value1 param2 value2 ...
  Replace xxx with the actual framework name, for example, Caffe, TensorFlow, or ONNX.
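The truncation pitfall noted above can be illustrated with a minimal key=value parser sketch. This is hypothetical code for illustration only, not the actual amct implementation: when a space precedes the value, the shell splits the argument into two tokens, so the option arrives with an empty value.

```python
def parse_kv(args):
    """Split each 'key=value' token; everything after the first '=' is the value."""
    opts = {}
    for arg in args:
        key, _, value = arg.partition("=")
        opts[key] = value
    return opts

# Correct form: no space around '='.
print(parse_kv(["model=net.prototxt"]))      # {'model': 'net.prototxt'}

# With a space before the value, the shell passes two separate tokens,
# so 'model' ends up with an empty value.
print(parse_kv(["model=", "net.prototxt"]))  # {'model': '', 'net.prototxt': ''}
```

This is why the space-free key=value form and the space-separated `param value` form must not be mixed within one option.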
| Option | Command-Line Options | Use Framework/Network | Application Scenarios |
|---|---|---|---|
| Displays help information. | | | |
| Sets the path (including the file name) of the model file to be quantized. | | | |
| Sets the path (including the file name) of the weight file (.caffemodel) to be quantized. This option applies only to Caffe models. | | | |
| Sets the name of the output tensor for the source model. | | | |
| Sets the model save path. | | | |
| Sets the shape of the model input. The shape must be the same as that of the input data after dataset processing. Required if --evaluator is not included. | | | |
| Sets the number of batches for the inference stage of post-training quantization (PTQ). | | | |
| Sets the path (including the file name) of the simplified configuration file for PTQ. | | | |
| Sets the path of the .bin dataset that matches the model. Required if --evaluator is not included. | | | |
| Sets the input data type. Required if --evaluator is not included. | | | |
| Sets the Python script that is based on the Evaluator base class and contains the evaluator. This option is mutually exclusive with --input_shape, --data_dir, and --data_types. | | | |
| Specifies a GPU for quantization. | | | |