quantize_model_ascend

Function Usage

Quantizes the input graph based on the given quantization configuration file: inserts quantization operators into the graph, generates the quantization factor record file record_file, and returns the modified graph together with its new list of output operators.

Prototype

calibration_graph, calibration_outputs = quantize_model_ascend(graph, outputs, config_file, record_file)

Parameters

graph
  Input. A tf.Graph of the model to be quantized. Type: tf.Graph.
  Constraints: the graph must be an inference graph and must not contain operators in training mode; for example, the is_training attribute of the FusedBatchNormV3 operator must be False. The trained weights must already be loaded into the graph. (A verification sketch follows this list.)

outputs
  Input. List of output operators of the graph. Type: a list of strings (operator names).

config_file
  Input. User-defined quantization configuration file, which specifies the configuration of each layer to be quantized in the tf.Graph. Type: a string.

record_file
  Input. Path of the quantization factor record file, including the file name. Type: a string.

calibration_graph
  Return. The graph modified by the tool, with quantization operators inserted. Type: a tf.Graph.

calibration_outputs
  Return. List of output operators of calibration_graph. Type: a list of strings.
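Before calling the API, it can help to verify the inference-graph constraint and to list candidate output operators. The sketch below is illustrative only: check_inference_graph and find_output_ops are hypothetical helpers, not part of amct_tensorflow, and the dead-end heuristic in find_output_ops is just one way to locate output operators.

import tensorflow as tf

def check_inference_graph(graph):
    # The tool requires an inference graph: every FusedBatchNormV3 op
    # must have is_training=False.
    for op in graph.get_operations():
        if op.type == 'FusedBatchNormV3' and op.get_attr('is_training'):
            raise ValueError('training-mode BN found: %s' % op.name)

def find_output_ops(graph):
    # Candidate outputs: ops whose output tensors feed no other op.
    consumed = {t.name for op in graph.get_operations() for t in op.inputs}
    return [op.name for op in graph.get_operations()
            if op.outputs and all(t.name not in consumed for t in op.outputs)]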

Return Value

calibration_graph and calibration_outputs, as described in the parameter list above.

quantize_model_ascend performs BN fusion on the graph. If the outputs of the network model include a BN layer and that BN layer is fused, the output node of the network changes: for example, Conv+BN (or Conv+BiasAdd+BN) is fused into Conv+BiasAdd, and the output node equivalent to the BN layer becomes the BiasAdd, as the sketch below illustrates.
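A practical consequence is that calibration fetches should be resolved from the returned calibration_outputs rather than from the original output names. A minimal sketch, assuming a hypothetical model whose last node is a BN layer (all node names below are made up):

import tensorflow as tf
import amct_tensorflow as amct

# Hypothetical graph that ends in Conv -> BiasAdd -> BN, so the caller's
# output list names the BN node.
original_outputs = ['net/conv1/FusedBatchNormV3']

calibration_graph, calibration_outputs = amct.quantize_model_ascend(
    graph=tf.get_default_graph(),
    outputs=original_outputs,
    config_file="./configs/config.json",
    record_file="./record_scale_offset.txt")

# After fusion, the node equivalent to the BN output is the BiasAdd, so
# fetch tensors by the names in calibration_outputs, not original_outputs.
fetches = [calibration_graph.get_tensor_by_name(name + ':0')
           for name in calibration_outputs]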

Example

import tensorflow as tf
import amct_tensorflow as amct

# Build a network to be quantized (build_network is user-defined).
network = build_network()

# Insert the quantization API.
calibration_graph, calibration_outputs = amct.quantize_model_ascend(
    graph=tf.get_default_graph(),
    outputs=['network_output'],  # replace with your model's output operator names
    config_file="./configs/config.json",
    record_file="./record_scale_offset.txt")