Sets the data type and format of specified network inputs to float16 and NC1HWC0, respectively.
This parameter must be used together with INPUT_FP16_NODES. If it is set to true, the data type and format of the inputs listed in INPUT_FP16_NODES are set to float16 and NC1HWC0, respectively.
Specifies the output nodes (operators) of a network model, or the names of the model outputs.
If no output nodes (output operator names) are specified, information about the operators of the last layer is output by default. If output nodes are specified, information about those operators is output instead.
To check the parameters of a specific layer, mark the layer as the output node. After the model is built, you can view the parameter settings of the specified operator at the end of the .om model file or the .json file converted from the .om model file.
Set this parameter in one of the following formats:
Format 1: "node_name1:0;node_name1:1;node_name2:0"
Specifies output nodes by node name (node_name) in the model. Enclose the specified output nodes in double quotation marks ("") and separate the nodes with semicolons (;). node_name must be the node name in the network model before model building. The number after the colon (:) indicates the output index. For example, node_name1:0 indicates the first output of the node named node_name1.
Format 2: "topname1;topname2" (only Caffe is supported).
Specifies output nodes by the top names of layers. Enclose the specified output nodes in double quotation marks ("") and separate them with semicolons (;). topname must be the top name of a layer in the Caffe network model before building. If multiple layers have the same top name, the name of the output layer applies.
Format 3: "output1;output2;output3" (only ONNX is supported).
This format specifies the output names of the network model. Enclose the specified output names in double quotation marks ("") and separate the names with semicolons (;). Each name must be an output of the network model.
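As an illustration, the three formats could look like this. The parameter key OUT_NODES is assumed here for consistency with the other uppercase keys in this document, and all node, top, and output names are hypothetical:

```
# Format 1: node name plus output index
OUT_NODES = "conv1:0;conv1:1;pool1:0"
# Format 2 (Caffe only): top names of output layers
OUT_NODES = "prob;loss"
# Format 3 (ONNX only): output names of the model
OUT_NODES = "output1;output2;output3"
```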
Fusion patterns are classified into the following types:
Built-in fusion patterns:
General: common scope fusion patterns applicable to all networks. They are enabled by default and cannot be manually disabled.
Non-general: scope fusion patterns applicable to specific networks. They are disabled by default. You can use ENABLE_SCOPE_FUSION_PASSES to enable specific patterns as required.
Custom fusion patterns:
General: They are enabled by default after being loaded and cannot be manually disabled.
Non-general: They are disabled by default after being loaded. You can use ENABLE_SCOPE_FUSION_PASSES to enable specific patterns as required.
Enclose the specified fusion patterns in double quotation marks ("") and separate them with commas (,).
The parameter can be passed to aclgrphParseTensorFlow only.
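For example, to enable two non-general patterns, the setting could look like this (the pass names below are illustrative, not actual pattern names):

```
ENABLE_SCOPE_FUSION_PASSES = "ScopeExamplePass1,ScopeExamplePass2"
```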
INPUT_DATA_NAMES
Specifies the mapping between the name and index attributes of the input nodes in the model file. The system sets the index attribute of each input node according to the order of the input names.
INPUT_SHAPE
Specifies the shapes of the network inputs.
If the model has a single input, the shape information is "input_name:n,c,h,w". Enclose the specified node in double quotation marks ("").
If the model has multiple inputs, the shape information is "input_name1:n1,c1,h1,w1;input_name2:n2,c2,h2,w2". Separate the inputs with semicolons (;). input_name must be the name of a node in the network model before conversion.
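For example, the shape settings could look like this (the input names and dimension values are hypothetical):

```
# Single input
INPUT_SHAPE = "input_name:1,3,224,224"
# Multiple inputs
INPUT_SHAPE = "input_name1:1,3,224,224;input_name2:1,3,300,300"
```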
If dimension values of the input data in the original model are not fixed, the model can be converted by setting the shape range.
Setting the shape range: The shape range cannot be set for the Atlas 200/300/500 Inference Product.
When setting the INPUT_SHAPE parameter, you can set the value of the corresponding dimension to a range.
To set the shape range based on node names, the format is "input_name1:n1,c1,h1,w1;input_name2:n2,c2,h2,w2", for example, "input_name1:8~20,3,5,-1;input_name2:5,3~9,10,-1". Enclose the specified nodes in double quotation marks (""), and separate them by semicolons (;). input_name must be the node name in the network model before model conversion. As a best practice, you should set the parameter based on node names.
To set the shape range based on node indexes, the format is "n1,c1,h1,w1;n2,c2,h2,w2", for example, "8~20,3,5,-1;5,3~9,10,-1". If the node name is not specified, the nodes are sorted by the index and separated by semicolons (;). When the shape range is specified based on the index, the index attribute must be set sequentially from 0 for data nodes.
If you do not want to fix a dimension's value or range, set it to -1, indicating that the dimension can be any value greater than or equal to 0. In this case, the theoretical upper limit is the int64 range; the practical limit is the physical memory available on the host and device, so increase the memory size if larger shapes are required.
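The shape-range syntax above can be summarized as: "a~b" bounds a dimension to [a, b], "-1" leaves it unbounded (any value >= 0), and a plain number fixes it. The following sketch illustrates these parsing rules; the helper function is for illustration only and is not part of any real API:

```python
def parse_input_shape(spec):
    """Map each input name in an INPUT_SHAPE string to (min, max) bounds.

    "a~b" -> (a, b); "-1" -> (0, None), i.e. unbounded; "n" -> (n, n).
    """
    shapes = {}
    for entry in spec.split(";"):
        name, dims = entry.split(":")
        bounds = []
        for dim in dims.split(","):
            if "~" in dim:
                # Range: the dimension may take any value in [lo, hi].
                lo, hi = dim.split("~")
                bounds.append((int(lo), int(hi)))
            elif dim == "-1":
                # Unbounded: any value >= 0, limited only by int64
                # and available host/device memory.
                bounds.append((0, None))
            else:
                # Fixed dimension value.
                v = int(dim)
                bounds.append((v, v))
        shapes[name] = bounds
    return shapes

print(parse_input_shape("input_name1:8~20,3,5,-1"))
# -> {'input_name1': [(8, 20), (3, 3), (5, 5), (0, None)]}
```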
Scalar shape.
Non-dynamic profile scenario:
A scalar input has an empty shape and is optional. For example, if the model has two inputs, where input_name1 is a scalar (shape "[]") and input_name2 has the shape [n2,c2,h2,w2], the shape information of the model is "input_name1:;input_name2:n2,c2,h2,w2". Enclose the specified nodes in double quotation marks (""). Separate different inputs with semicolons (;). input_name must be the node name in the network model before conversion. To configure a scalar input, leave its shape empty.
Configuration example:
Static shape. For example, if the input shape information of a network consists of two inputs (input_0_0 [16,32,208,208] and input_1_0 [16,64,208,208]), the configuration of INPUT_SHAPE is as follows:
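Based on the format described earlier in this section (the key syntax is assumed from the other parameters in this document):

```
INPUT_SHAPE = "input_0_0:16,32,208,208;input_1_0:16,64,208,208"
```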
A scalar input is optional. For example, if the model has two inputs, where input_name1 is a scalar and input_name2 has the shape [16,32,208,208], the configuration example is as follows:
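Based on the scalar-shape format described earlier in this section, the scalar input's shape is left empty after the colon (the key syntax is assumed from the other parameters in this document):

```
INPUT_SHAPE = "input_name1:;input_name2:16,32,208,208"
```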
In the preceding example, input_name1 is optional.
NOTE:
INPUT_SHAPE is optional. If this parameter is not set, the shapes of the corresponding Data nodes are used by default. If it is set, the specified shapes are used and written to the corresponding Data nodes.