Model Conversion
Portals
- You can navigate to the Model Converter in either of the following ways:
- Choose the corresponding menu item from the menu bar.
- Choose the corresponding menu item from the menu bar, and then click the conversion icon on the toolbar displayed below the menu bar.
For details, see Procedure.
- You can also run the ATC tool to convert the model as follows:
Prerequisites
The model file and weight file of the model have been uploaded by the MindStudio installation user to the development environment where the Ascend-CANN-Toolkit resides. Before model conversion, you can configure environment variables as required. For details, see Setting Global Environment Variables.
Procedure
- On the Model Information tab page in the Model Converter dialog box, select the uploaded model file and weight file, as shown in Figure 1 or Figure 2.
Table 1 describes the parameters on the Model Information tab page.
Table 1 Parameter description (Parameter / Description / Remarks)
CANN Machine
(This parameter is supported only by the Windows OS.)
SSH address for remotely connecting to the environment where CANN is located.
Mandatory.
The format is <username>@localhost:port_number.
Model File
Model file. Write permission on the model file must be removed for other users.
Mandatory.
- When performing model conversion on Linux, click the file selection icon on the right, or enter the path of the source model file on the local server.
- When performing model conversion on Windows, select the remote path in the displayed dialog box, select the source model file from the background server path, and upload the file. Alternatively, select the local path, click the file selection icon on the right, select or enter the path of the source model file on the local Windows host, and upload the file.
NOTE: If the error message "Failed to get model input shape." is displayed when importing an ultra-large model, choose Help > Change Memory Settings from the menu bar. In the displayed Memory Settings window, increase the memory.
Weight File
Weight file.
Required when the original framework is Caffe.
- Windows:
The model file and weight file must be stored in the same directory. After you select a model file, the weight file is automatically selected.
- Linux:
- If the model file and weight file are placed in the same directory on the server and their file names are the same, the weight file is automatically selected after the model file is selected.
- If the model file and weight file are stored in different directories on the server or in the same directory with different file names:
- Upload the weight file manually. Click the file selection icon on the right, select the weight file corresponding to the model file from the server path, and upload the weight file.
- Alternatively, enter the path of the .caffemodel weight file on the server in the text box, including the file name extension.
Model Name
Model file name.
Mandatory.
- After a model file is selected, this parameter is automatically filled. You can change the name as required.
- If a model file with the same name already exists in the output path of model conversion, a message is displayed after you click Next, asking you to replace the original file or rename the current model.
Target SoC Version
Target processor model.
Mandatory.
Set this parameter based on the processor form in the board environment.
Output Path
Model file output path.
Mandatory.
The default output path is $HOME/modelzoo/${Model Name}/${Target SoC Version}/. To customize the output path, enter one manually or click the file selection icon on the right to select one. The system copies the model files from the default model output path to the custom Output Path.
Input Format
Input data format.
Mandatory.
This parameter can only be set to a single value.
- If the original framework is Caffe, NCHW and ND (any format with N ≤ 4) are supported. Defaults to NCHW.
- If the original framework is ONNX, NCHW, NCDHW, and ND (any dimension format with N ≤ 4) are supported. Defaults to NCHW.
- If the original framework is MindSpore, the value is NCHW.
- If the original framework is TensorFlow, NHWC, NCHW, ND, NCDHW, and NDHWC are supported. Defaults to NHWC.
Input Nodes
Model input node information.
- If a model file is parsed successfully, the shape and type information of the model input nodes is displayed below.
- If Input Nodes cannot be parsed from the selected model file, enter the information manually: click the edit icon on the right and, in the displayed dialog box, enter the name, shape, and data type of each input node.
- If a model has multiple inputs, the shape and type information of each input node is displayed below Input Nodes after parsing succeeds.
NOTE:
- If the original framework is MindSpore, Input Nodes does not automatically parse the input information in the corresponding model. You need to manually enter the information. If the information is not specified, the ATC tool automatically parses necessary information from the model when the atc command is executed in the command line.
- The model conversion tool cannot delete nodes with dynamic shapes in models.
Shape
Input shape.
The example shown in Figure 1 indicates N (number of images processed per batch), C (channel count), H (height), and W (width) of the input data, respectively. For example, the channel count of RGB images is 3. If AIPP is enabled, the values of H and W are the height and width of the AIPP output. The settings vary with the input format:
- Input Format is a format with a static shape, for example, NCHW or NCDHW:
- Set to the dynamic batch size. Applies to the scenario where the number of images processed per inference batch is not fixed.
Set N in the Shape text box to -1; the Dynamic Batch Size text box is then displayed under Shape. Enter the dynamic batch size profiles (separated by commas) in the text box. A maximum of 100 profiles is supported, and the batch size of each profile should be within [1, 2048]. For example, you can enter: 1,2,4,8
If the Dynamic Batch parameter is set during model conversion, you need to add a call to aclmdlSetDynamicBatchSize before the aclmdlExecute call to set the runtime batch size when running an application project for model inference. For details about how to use the aclmdlSetDynamicBatchSize API, see the "" in the Application Software Development Guide (C&C++).
- Set to the dynamic image size profiles. Applies to the scenario where the image size per inference batch is not fixed.
Set H and W in the Shape text box to -1; the Dynamic Image Size text box is then displayed under Shape. Enter at least two groups of dynamic image size profiles in the text box, separating the groups with semicolons and the height and width within a group with commas. A maximum of 100 profiles is supported. For example, you can enter: 112,112;224,224
If the Dynamic Image Size parameter is set during model conversion, you need to add a call to aclmdlSetDynamicHWSize before the aclmdlExecute call to set the runtime image size when running an application project for model inference. For details about how to use the aclmdlSetDynamicHWSize API, see the "" in the Application Software Development Guide (C&C++).
If dynamic image size is enabled during model conversion and the data preprocessing function in Data Pre-Processing is required, the Crop and Padding functions are unavailable.
Dynamic batch size and dynamic image size are mutually exclusive.
- Set to the dynamic dimension in ND format when Input Format is ND. Applies to the scenario where any dimension is processed each time during inference.
Set the quantity and position of -1 in the Shape text box as required, and the Dynamic Dims text box is displayed under Shape. Enter 2 to 100 groups of dynamic dimension parameters in the text box. Separate the groups by semicolons (;), and parameters in a group by commas (,). The content in a group cannot be the same as that in another group. The minimum dimension in a group is 1. The parameter values in each group correspond to the parameters marked with -1 in Shape. If Shape contains several -1s, the corresponding number of parameter values in each group must be set. For example, if the input shape information is "-1,-1,-1,-1", the input Dynamic Dims parameter can be 1,224,224,3;2,448,448,6.
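The Dynamic Dims constraints above (2 to 100 distinct groups, each value at least 1, one value per -1 in Shape) can be expressed as a small validation sketch. `check_dynamic_dims` is a hypothetical helper for illustration only; it is not part of the ATC tool or MindStudio:

```python
def check_dynamic_dims(shape, dims_groups):
    """Validate a Dynamic Dims string against the -1 positions in Shape.

    shape: model input shape with -1 marking dynamic axes, e.g. [-1, -1, -1, -1]
    dims_groups: the Dynamic Dims text, e.g. "1,224,224,3;2,448,448,6"
    Returns True only if the string satisfies the restrictions listed above.
    """
    n_dynamic = shape.count(-1)
    groups = [tuple(int(v) for v in g.split(",")) for g in dims_groups.split(";")]
    if not (2 <= len(groups) <= 100):          # 2 to 100 groups are required
        return False
    if len(set(groups)) != len(groups):        # groups must differ from one another
        return False
    for g in groups:
        # one value per -1 in Shape, and each dimension at least 1
        if len(g) != n_dynamic or any(v < 1 for v in g):
            return False
    return True
```

For example, with a shape of "-1,-1,-1,-1", the input "1,224,224,3;2,448,448,6" passes, while a single group or a group with the wrong number of values fails.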
Type
Data type of the input node.
- If the original framework type is Caffe or ONNX, the supported data types are FP32, FP16, and UINT8.
- If the original framework type is MindSpore, the supported data types are FP32 and UINT8.
- If the original framework type is TensorFlow, the supported input data types are FP32, FP16, UINT8, Int32, Int64, and Bool.
- If the original framework is Caffe, MindSpore, or ONNX, the Data Pre-Processing tab page (for configuring the AIPP function) in step 2 can be configured only when Type is set to UINT8. If Type is UINT8 but the H and W information in Shape cannot be obtained, the Data Pre-Processing tab page cannot be configured.
- If the original framework is TensorFlow, the Data Pre-Processing tab page (for configuring the AIPP function) in step 2 can be configured only when Type is not FP16.
- If Type is set to UINT8, the Data Pre-Processing (for configuring the AIPP function) tab is enabled by default and cannot be disabled.
- If Type is set to FP32, Int32, Int64, or Bool, the Data Pre-Processing (for configuring the AIPP function) tab is disabled by default and can be manually enabled.
If a model has multiple inputs, the Data Pre-Processing tab page can be configured only on nodes where Type is not FP16. On these nodes, if the H and W information in Shape cannot be obtained, the Data Pre-Processing tab page cannot be configured.
Output Nodes
Model output node information.
Optional.
Click Select. Right-click a layer of nodes and choose Select from the shortcut menu; the layer turns blue. Click OK. All selected operators are displayed under Output Nodes. To cancel a selection, right-click the unwanted operator and choose the corresponding command from the shortcut menu.
- Op Name: operator name.
- Data Type: output data type, selected from FP32, UINT8, and FP16.
You can deselect any unwanted operators as required under Output Nodes. This section assumes that all operators at the selected layer are retained as the model output.
This function applies to the scenario where you want to check the parameters of a specific operator layer. Select the layer and deselect unwanted operators as required under Output Nodes. After model conversion, in the corresponding .om model file, the outputs of the operators at the layer are used as the outputs of the model. For details, see Model Visualization.
NOTE:
- If no layer is selected or no operator is added to Output Nodes, the output of the model is the outputs of the operators at the output layer.
- If a layer is selected and one or more operators are retained under Output Nodes, the output of the retained operators is used as the output of the model.
- If a selected operator is fused during model conversion, the operator cannot be specified as the output node.
- If the selected model contains an unsupported operator, click Select. A failure log is generated in the Output window of MindStudio, specifying that the operator is not supported and that its shape is not accessible. Such operators are highlighted in red in the displayed Model Visualizer. In this case, you cannot obtain the output format and shape of the operator.
- If the original framework is MindSpore, this parameter cannot be edited to specify an output node.
Load Configuration
Loads the configuration file of the last successful conversion.
Optional.
After a user model is successfully converted, a ${Model Name}_config.json configuration file is generated in $HOME/modelzoo/${Model Name}/${Target SoC Version}/. The configuration file records the model conversion configuration, including the model path, model name, input and output configurations, and data preprocessing configuration. You can click Load Configuration to load an existing configuration file; the corresponding configuration information is then automatically filled in.
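As a loose illustration of what such a file records, a configuration might look like the following. All field names, paths, and values here are hypothetical; the actual schema is defined by MindStudio:

```json
{
  "model_file": "/home/user/models/resnet50.prototxt",
  "weight_file": "/home/user/models/resnet50.caffemodel",
  "model_name": "resnet50",
  "input_format": "NCHW",
  "input_nodes": [
    {"name": "data", "shape": [1, 3, 224, 224], "type": "UINT8"}
  ],
  "output_nodes": [],
  "data_preprocessing": {"aipp_mode": "static"}
}
```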
After selecting a model file in the Model File area, click the visualization icon on the right. A progress bar for generating the model network structure is displayed. You can then view the original model network structure and perform the following operations on it. For details, see Description of the Model Visualization GUI. (This function is not available for MindSpore models.)
- View the operator details.
- View the output format and shape of an operator.
If the selected model contains an unsupported operator, a failure log is generated in the Output window of MindStudio, specifying that the operator is not supported and that its shape is not accessible. Such operators are highlighted in red in the displayed Model Visualizer. In this case, you cannot obtain the output format and shape of the operator. To rectify the fault in this scenario, see Exception Handling.
- Search for an operator.
- Find operator details.
- Click Next to move on to the Data Pre-Processing tab page, as shown in Figure 3. The data preprocessing capability is backed by the AI Pre-processing (AIPP) module of the Ascend AI Processor. It enables hardware-based image preprocessing, including color space conversion (CSC), image normalization (by subtracting the mean value or multiplying by a factor), image cropping (by specifying the crop start and cropping the image to the size required by the neural network), and more.
Table 2 describes the parameters.
Table 2 Parameter description (Parameter / Description / Remarks)
Image Pre-processing Mode
AIPP mode for image preprocessing.
- Static: static AIPP.
- Dynamic: dynamic AIPP.
Load Aipp Configuration
AIPP configuration loading. Dynamic AIPP does not support this function.
This function is disabled by default. After it is enabled, select the corresponding configuration file in the Aipp Configuration File text box.
After the configuration file is loaded, all parameters under Input Node: (data) and Input Node: (im_info) are automatically set based on the configuration file. You can turn on the switch on the right of Aipp Configuration File to display all parameters of Input Node: (data) and Input Node: (im_info) in the lower part of the page, and modify them as required.
If you set Image Pre-processing Mode to Static, also configure the following parameters:
Input Node: (data)
AIPP switch by node.
This parameter is automatically enabled only when Type of the data node is set to UINT8 in the Input Nodes area, as shown in Figure 1. For a TensorFlow model, this parameter can be manually enabled when Type of the data node is set to FP32, Int32, Int64, or Bool in the Input Nodes area and the model width and height can be obtained, as shown in Figure 1.
Input Node: (im_info)
Switches AIPP on for the second input node. This parameter is available only when the model has two inputs.
This parameter is automatically enabled only when Type of the im_info node is set to UINT8 in the Input Nodes area, as shown in Figure 1. Note that im_info in Input Node: (im_info) varies depending on the parsed model.
Input Image Format
Input image format.
- If set to YUV420 sp, YVU420 sp, YUV422 sp, or YVU422 sp, the following data types are available on the right:
BT.601 (Video Range), BT.601 (Full Range), BT.709 (Video Range), BT.709 (Full Range)
Different data types correspond to different CSC configurations. (The CSC factors are stored in the insert_op.cfg configuration file after model conversion.)
- BT.601 is the standard for standard-definition television (SDTV).
- BT.709 is the standard for high-definition television (HDTV).
The two standards are classified into narrow range (Video Range) and wide range (Full Range) according to their representation ranges. For details about how to determine the standard of the input data, see "FAQs > How Do I Determine the Video Stream Format Standard When I Perform CSC on a Model Using AIPP?" in the ATC Instructions.
- If set to YUV400, CSC is not supported.
- If set to RGB package, BGR package, ARGB package, RGBA package, ABGR package, or BGRA package, no input data type option is displayed on the right. After the model conversion is complete, the values of the following parameters in the data preprocessing configuration file insert_op.cfg vary depending on the settings of Input Image Format and Model Image Format:
- If set to BGR package, the output is in RGB format. If set to RGB package, the output is in BGR format.
# Whether to enable R/B or U/V channel swap before CSC
rbuv_swap_switch : true
- If set to BGR package, the output is in BGR format. If set to RGB package, the output is in RGB format.
# Whether to enable R/B or U/V channel swap before CSC
rbuv_swap_switch : false
- If set to ARGB package, the output is in RGBA format. If set to ABGR package, the output is in BGRA format.
# Whether to enable R/B or U/V channel swap before CSC
rbuv_swap_switch : false
# Whether to enable RGBA->ARGB and YUVA->AYUV channel swap before CSC
ax_swap_switch : true
- If set to ARGB package, the output is in BGRA format. If set to ABGR package, the output is in RGBA format.
rbuv_swap_switch : true
ax_swap_switch : true
- If set to RGBA package, the output is in RGBA format. If set to BGRA package, the output is in BGRA format.
rbuv_swap_switch : false
ax_swap_switch : false
- If set to RGBA package, the output is in BGRA format. If set to BGRA package, the output is in RGBA format.
rbuv_swap_switch : true
ax_swap_switch : false
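The channel-swap rules above can be condensed into a small sketch. `swap_switches` is a hypothetical helper that derives the two switch values from the packed input format and the desired output channel order; it is illustrative only, since ATC reads these settings from insert_op.cfg:

```python
def swap_switches(input_fmt, output_fmt):
    """Return (rbuv_swap_switch, ax_swap_switch) per the rules above.

    input_fmt: e.g. "ARGB package", "BGR package"
    output_fmt: desired output order, e.g. "RGBA", "RGB"
    Illustrative summary only, not an ATC API.
    """
    packed = input_fmt.split()[0]
    base_in = packed.replace("A", "").replace("X", "")   # channel order without alpha/X
    base_out = output_fmt.replace("A", "")
    # R/B swap is needed when the RGB order of input and output differ
    rbuv = base_in != base_out
    # A-leading formats (ARGB/ABGR) need the alpha moved to the end
    ax = packed.startswith(("A", "X"))
    return rbuv, ax
```

For example, converting an ARGB package input to RGBA output requires only the alpha swap, matching the `rbuv_swap_switch : false` / `ax_swap_switch : true` case listed above.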
Input Image Resolution
Input image size.
If Input Image Format is set to YUV420 sp, the width and height must be even numbers.
Model Image Format
Model image format. The toggle also serves as the CSC switch (on by default). Turn the toggle on when Input Image Format is inconsistent with Model Image Format.
The model image format varies with the input image format.
- If the input image format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, or BGR package, then Model Image Format is RGB or BGR.
- If the input image format is RGB package, then Model Image Format is RGB, BGR, or GRAY.
- If the input image format is YUV400, Model Image Format can only be set to GRAY.
- If the input image format is ARGB package, RGBA package, ABGR package, or BGRA package, then Model Image Format is RGBA or BGRA.
- If the AIPP configuration file selected for Aipp Configuration File in step 2 is an abnormal file whose RGB data cannot be read, the default value of Model Image Format is GRAY.
Crop
Cropping switch (default off).
If the toggle is switched on, the following two parameters are displayed:
- Cropping Start: start position of image cropping. The values of Cropping Start [H][W] must be smaller than those of Input Image Resolution [H][W].
- Cropping Area: size of the cropped image. The default width and height are consistent with those in the Shape text box in the Input Nodes area, as shown in Figure 1. The specified width and height cannot be greater than those of Input Image Resolution.
Note the following restrictions on image cropping:
- If Input Image Format is set to YUV420 sp or YVU420 sp, the values of Cropping Start [H][W] must be even numbers.
- If Input Image Format is set to YUV422 sp or YVU422 sp, the values of Cropping Start [W] must be even numbers.
- If Input Image Format is set to other values, there is no restriction on Cropping Start [H][W].
- If image cropping is switched on: Input Image Resolution >= Cropping Area + Cropping Start
If image cropping is switched on with padding switched off, Cropping Area [H][W] can be set to 0 or left empty. In this case, the Cropping Area [H][W] values are obtained from the H and W shape information in the Input Nodes area (the height and width of the model input), as shown in Figure 1.
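The cropping restrictions above can be sketched as a validation helper. `crop_valid` is hypothetical, not a MindStudio or ATC API; it only encodes the even-value and size rules listed in this row:

```python
def crop_valid(input_res, crop_start, crop_area, image_format):
    """Check the image-cropping restrictions listed above.

    input_res, crop_start, crop_area: (H, W) pairs
    image_format: Input Image Format, e.g. "YUV420 sp" or "RGB package"
    """
    h, w = input_res
    sh, sw = crop_start
    ch, cw = crop_area
    # YUV420 sp / YVU420 sp: Cropping Start H and W must both be even
    if image_format in ("YUV420 sp", "YVU420 sp") and (sh % 2 or sw % 2):
        return False
    # YUV422 sp / YVU422 sp: Cropping Start W must be even
    if image_format in ("YUV422 sp", "YVU422 sp") and sw % 2:
        return False
    # Input Image Resolution >= Cropping Area + Cropping Start
    return h >= ch + sh and w >= cw + sw
```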
Padding
Image padding switch. If this switch is enabled, the image padding function is available. This function is disabled by default.
The value range of Padding Area [L][R][B][T] is [0, 32]. Make sure the output height and width (after AIPP with padding is performed) are consistent with those required by the model.
Normalization
Normalization switch.
After the switch is turned on, Conversion Type is displayed, indicating the calculation rule, which contains the Mean, Min, and Variance configuration options.
Mean
Mean value of each channel.
This line is available only when Normalization is switched on.
- If Model Image Format is set to RGB, this parameter is displayed as Mean: [R][G][B]. The default values of the channels are 104, 117, and 123. You can change them as required.
- If Model Image Format is set to BGR, this parameter is displayed as Mean: [B][G][R]. The default values of the channels are 104, 117, and 123. You can change them as required.
- If Model Image Format is set to GRAY, the default value of this parameter is 104. You can change it as required.
- If Model Image Format is set to RGBA, this parameter is displayed as Mean: [R][G][B][A]. The default values of the channels are 104, 117, 123, and 0. You can change them as required.
- If Model Image Format is set to BGRA, this parameter is displayed as Mean: [B][G][R][A]. The default values of the channels are 104, 117, 123, and 0. You can change them as required.
Min
Minimum value of each channel.
This line is available only when Normalization is switched on.
- If Input Image Format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, YUV400, RGB package, or BGR package, this parameter is displayed as Min: [R][G][B]. The default value is 0 for each channel.
- If Input Image Format is ARGB package, RGBA package, ABGR package, or BGRA package, this parameter is displayed as Min: [R][G][B][A]. The default value is 0 for each channel.
1/Variance
Reciprocal of the variance of each channel.
This line is available only when Normalization is switched on.
- If Input Image Format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, YUV400, RGB package, or BGR package, this parameter is displayed as 1/Variance: [R][G][B]. The default value is 1.0 for each channel.
- If Input Image Format is ARGB package, RGBA package, ABGR package, or BGRA package, this parameter is displayed as 1/Variance: [R][G][B][A]. The default value is 1.0 for each channel.
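Assuming the usual AIPP per-channel rule of subtracting the mean and the minimum and then multiplying by the variance reciprocal, normalization can be sketched as follows. `aipp_normalize` is an illustrative helper, not an AIPP or AscendCL API:

```python
def aipp_normalize(pixel, mean, min_value, reciprocal_variance):
    """Per-channel normalization: (pixel - Mean - Min) * (1/Variance).

    Assumed formula for illustration; see the ATC Instructions for
    the authoritative AIPP normalization behavior.
    """
    return (pixel - mean - min_value) * reciprocal_variance
```

With the defaults above (Mean 104 for an R channel, Min 0, 1/Variance 1.0), a pixel value of 123 maps to 19.0.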
If you set Image Pre-processing Mode to Dynamic, also configure the following parameters:
Input Node: (data)
AIPP switch by node.
This parameter is automatically enabled only when Type of the data node is set to UINT8 in the Input Nodes area, as shown in Figure 1. For a TensorFlow model, this parameter can be manually enabled when Type of the data node is set to FP32, Int32, Int64, or Bool in the Input Nodes area and the model width and height can be obtained, as shown in Figure 1.
Input Node: (im_info)
Switches dynamic AIPP on for the second input node. This parameter is available only when the model has two inputs.
This parameter is automatically enabled only when Type of the im_info node is set to UINT8 in the Input Nodes area, as shown in Figure 1. Note that im_info in Input Node: (im_info) varies depending on the parsed model.
Max Image Size (Byte)
Maximum size of the input image. Required in the dynamic AIPP scenario. (In the dynamic batch size scenario, N is set to the maximum batch size.)
- If Input Image Format is set to YUV400_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 1
- If Input Image Format is set to YUV420SP_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 1.5
- If Input Image Format is set to XRGB8888_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 4
- If Input Image Format is set to RGB888_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 3
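The four formulas above can be condensed into a small sketch. `max_image_size` is a hypothetical helper for computing the minimum Max Image Size in bytes; it is not an ATC parameter or API:

```python
def max_image_size(n, h, w, image_format):
    """Minimum Max Image Size (bytes) for dynamic AIPP, per the formulas above.

    n: (maximum) batch size; h, w: Input Image Resolution [H] and [W]
    """
    bytes_per_pixel = {
        "YUV400_U8": 1,      # single-channel gray
        "YUV420SP_U8": 1.5,  # 4:2:0 chroma subsampling
        "XRGB8888_U8": 4,    # 4 bytes per pixel
        "RGB888_U8": 3,      # 3 bytes per pixel
    }[image_format]
    return int(n * h * w * bytes_per_pixel)
```

For example, a single 224 x 224 RGB888_U8 image requires at least 224 * 224 * 3 = 150528 bytes.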
Pay attention to the following requirements on the input images:
If AIPP is enabled during model conversion, model inference requires NHWC inputs. In this case, the format of the input node configured with AIPP is changed accordingly and may differ from that specified by Input Format on the Model Information tab page in step 1.
- Click Next to move on to the Advanced Options Preview tab page and configure advanced options. See Figure 4.
The parameters are described as follows.
- Click Finish to start model conversion.
Model conversion logs are printed to the Output window in the lower part of MindStudio. If the message "Model converted successfully" is displayed, the model conversion is complete. The Output window also displays the model conversion commands, environment variables, model conversion result, model output path, and model conversion log path.
- After the model conversion is complete, you can find the generated .om model file (for running in the operating environment), the configuration file ${modelname}_config.json used in model conversion, and the log file ModelConvert.txt in the $HOME/modelzoo/resnet50/$Soc_Version directory.
- If Data Preprocessing is switched on, the data preprocessing configuration file insert_op.cfg is generated in the same directory as the .om file.
- If Operator Fusion is switched off, the fusion_switch.cfg configuration file is generated in the same directory as the .om model file to record the functions that are disabled.
Find the model conversion log file ModelConvert.txt in $HOME/modelzoo/resnet50/$Soc_Version. Information similar to the following is displayed:
drwxr-x--- 2     4096 Mar 10 16:46 ./
drwx------ 3     4096 Mar 10 16:45 ../
-rw------- 1      127 Mar 10 15:55 fusion_switch.cfg    --Fusion switch configuration file
-rw------- 1      453 Mar 10 16:45 insert_op.cfg        --Data preprocessing configuration file
-rw-r----- 1      453 Mar 10 16:45 ModelConvert.txt     --Log file
-rw------- 1     2095 Mar 10 18:03 resnet50_config.json --Model conversion configuration file. You can load this file to use the same configuration in future model conversions.
-rw------- 1 51581408 Mar 10 16:46 resnet50.om          --Model file running on the board
Exception Handling
- Symptom
If the selected model file contains operators unsupported by the Ascend AI Processor, the network analysis report is displayed during model conversion, as shown in Figure 5.
In the Summary area on the left:
- All Operator: indicates the number of operators in the model to be converted. The unsupported operators are also counted.
Click All Operator. In the Result Details area on the right, details about all operators of the model are displayed, including the operator type, operator name, and parsing result. If an operator fails to be parsed, the cause of the parsing failure is displayed in Description.
- UnSupported Operator: indicates the number of operators that are not supported by model conversion. The classification and specific reasons are displayed below.
Click UnSupported Operator. All unsupported operators are displayed on the right.
- Solution
- In the network analysis report window shown in Figure 5, choose Result Details on the right. If failed is shown in the Result column for an operator, the operator information is selected. Click one of the solutions under Operation, for example, Create Operator, to create a custom operator project.
If an operator project has been opened, a dialog box is displayed, as shown in Figure 6. You can add an operator to the current project or create a new project. If no operator project exists, a dialog box for creating an operator project is displayed.
For details about how to create a custom operator project, see Project Creation.
- Create the custom operator project by following the given procedure.
The Operator Type of the custom operator is automatically filled in based on the operator type selected in the network analysis report. After the project is created, it is stored in $HOME/AscendProjects by default.
The directory structure and main files of the operator project are as follows:
├── .idea
├── build        //Files generated after build
├── cmake        //Directory of public files related to build
├── framework    //Directory of the operator plugin implementation files
│   ├── tf_plugin    //Directory of the operator plugin files and build rule files of the TensorFlow framework
│   │   ├── tensorflow_add_plugin.cpp
│   │   ......
......
- Develop the custom operator. For details, see Operator Development.
After the custom operator is developed, try to convert the model again.