--output_type
Description
Sets the output data type of a network or an output node.
See Also
To specify the output type of an output node, use this option in conjunction with --out_nodes.
Argument
- FP32: recommended for classification and object detection networks.
- UINT8: recommended for image super-resolution networks, for better inference performance.
- FP16: recommended for classification and object detection networks, typically when the output of one network is used as the input of another.
- INT8
Restrictions:
After model conversion, the preceding data types appear as DT_FLOAT, DT_UINT8, DT_FLOAT16, or DT_INT8 in the generated .om model file.
If the output data type is not specified during model conversion, the output uses the data type of the operator at the output layer of the original network model. If it is specified, the type set by this option takes precedence, and --is_output_adjust_hw_layout does not take effect.
Suggestions and Benefits
None
Example
- Output data type of a network:
--output_type=FP32
- Output data type of an output node:
For example, --output_type="node1:0:FP16;node2:0:FP32" sets the output data type of node1 to FP16 and that of node2 to FP32. Enclose all the nodes in double quotation marks ("") and separate them with semicolons (;).
In this scenario, use this option in conjunction with --out_nodes:
--model=$HOME/module/resnet50_tensorflow*.pb --framework=3 --output=$HOME/module/out/tf_resnet50 --soc_version=<soc_version> --output_type="conv1:0:FP16" --out_nodes="conv1:0"
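To make the per-node argument format concrete, the sketch below parses a "node_name:output_index:dtype" string the way the examples above are written. This is a hypothetical illustration of the syntax only, not part of the ATC tool:

```python
# Hypothetical parser illustrating the --output_type per-node syntax;
# it is NOT part of the ATC tool itself.
VALID_TYPES = {"FP32", "UINT8", "FP16", "INT8"}

def parse_output_type(spec):
    """Parse 'node:index:dtype' entries separated by semicolons.

    Returns a dict mapping (node_name, output_index) -> dtype.
    """
    result = {}
    for entry in spec.split(";"):
        node, index, dtype = entry.split(":")
        if dtype not in VALID_TYPES:
            raise ValueError(f"unsupported output type: {dtype}")
        result[(node, int(index))] = dtype
    return result

# Example: the per-node setting from the documentation above.
print(parse_output_type("node1:0:FP16;node2:0:FP32"))
```

The semicolon-separated entries are why the whole argument must be quoted on the command line: an unquoted ";" would otherwise be interpreted by the shell.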
Applicability
Dependencies and Restrictions
None