Script Conversion

Model Conversion and Tuning Portals

  1. Choose View > Appearance > Toolbar from the menu bar to display the toolbar below the menu bar, and then click the model converter button on the toolbar.
  2. Choose Ascend > Model Converter from the menu bar.

    For details, see Procedure.

Prerequisites

The MindStudio IDE installation user has uploaded the model file and weight file of the model to be converted to the development environment where the Ascend-CANN-Toolkit resides.

Procedure

The following uses the ATC tool as an example:

  1. Open the model conversion and tuning page, access the Model Information tab page, and upload the model file and weight file, as shown in Figure 1.
    Figure 1 Configuring model information (Linux)

    Table 1 describes the parameters on the Model Information tab page.

    Table 1 Parameter description

    Parameter

    Description

    Remarks

    Mode Type

    Run mode. The default value is ATC.

    Mandatory.

    • ATC converts a network model of an open-source framework to the offline model adapted to the Ascend AI Processor. This mode does not tune the model.
    • AOE makes full use of limited hardware resources to tune operators or subgraphs of a model to meet the performance requirements of operators and the entire network.

    Model File

    Model file. Write permission on the model file must be removed for other users.

    Mandatory.

    Click the button on the right, or enter the path of the source model file on the local server.

    NOTE:
    • If you get error message "Failed to get model input shape." when importing an ultra-large model, choose Help > Change Memory Settings from the menu bar. In the Memory Settings window displayed, increase the memory.
    • The configuration page only supports AOE tuning on a single model. To tune multiple models in a directory, run the AOE tuning command in the Terminal window in the lower part of the MindStudio IDE page. Example: aoe --job_type=1 --model_path=$home/xxx/

      $home/xxx/ indicates the path of the model to be tuned. Replace it with the actual path.

    Weight File

    Weight file.

    Required when the original framework is Caffe.

    • If the model file and weight file are placed in the same directory on the background server and their file names are the same, the weight file is automatically selected after the model file is selected.
    • If the model file and weight file are stored in different directories on the background server, or in the same directory with different file names, click the button on the right to select the weight file corresponding to the model file, or enter the path of the .caffemodel weight file on the background server in the text box.

    Model Name

    Name of the output model file.

    Mandatory.

    • After a model file is selected, this parameter is automatically filled. You can change the name as required.
    • If a model file with the same name already exists in the model output path, a message is displayed after you click Next, asking you to replace the original file or rename the current model.

    Job Type

    Tuning mode.

    This parameter is mandatory when Mode Type is set to AOE.

    Value options are as follows:

    • 1 - Subgraph Auto Tune (SGAT) (default): subgraph tuning.
    • 2 - Operator Auto Tune (OPAT): operator tuning.

    Target SoC Version

    Target processor model.

    This parameter is mandatory when Mode Type is set to ATC.

    Set this parameter based on the processor form in the board environment.

    Output Path

    Model file output path.

    Mandatory.

    The default output path is $HOME/modelzoo/${Model Name}/${Target SoC Version}/. You can also enter a path or click on the right to customize a path.

    Input Format

    Input data format.

    Mandatory.

    This parameter can only be set to a single value.

    • If the original framework is Caffe, NCHW and ND (any dimension format with N ≤ 4) are supported. Defaults to NCHW.
    • If the original framework is ONNX, NCHW, NCDHW, and ND (any dimension format with N ≤ 4) are supported. Defaults to NCHW.
    • If the original framework is MindSpore, the value is NCHW.
    • If the original framework is TensorFlow, NCHW, NHWC, ND, NCDHW, and NDHWC are supported. Defaults to NHWC.

    Input Nodes

    Model input node information.

    Mandatory.

    • If a model file is parsed successfully, the shape and type information of the model input nodes will be displayed below.
    • If Input Nodes cannot be parsed from the selected model file, you need to enter the information manually. Click the button on the right, and in the displayed dialog box enter the name, shape, and data type of each input node.
    • If a model has multiple inputs, the shape and type information of each input node will be displayed below Input Nodes after the parsing is successful.
      NOTE:
      • If the original framework is MindSpore, Input Nodes does not automatically parse the input information in the corresponding model. You need to manually enter the information. If the information is not specified, the ATC tool automatically parses necessary information from the model when the atc command is executed in the command line.
      • The model conversion tool cannot delete nodes with dynamic shapes in models.

    Shape

    Input shape.

    The example values shown in Figure 1 indicate N (number of images processed per batch), C (channel count), H (height), and W (width) of the input data, respectively. For example, the channel count of RGB images is 3. If AIPP is enabled, the values of H and W are the height and width of the AIPP output.

    For settings of more input data formats, see Description about "Shape".

    Type

    Data type of the input node.

    Mandatory.

    If a model has multiple inputs, the Data Pre-Processing tab page can be configured only on nodes where Type is not FP16. On these nodes, if the H and W information in Shape cannot be obtained, the Data Pre-Processing tab page cannot be configured.

    For details about the data types supported by the original framework and whether the Data Pre-Processing tab page can be configured, see Description about "Type".

    Load Configuration

    Loads the configuration file of the last successful conversion.

    Optional.

    After a model is successfully converted, a ${Model Name}_config.json configuration file is generated in $HOME/modelzoo/${Model Name}/${Target SoC Version}/. The configuration file records the model conversion configuration, including the model path, model name, input and output configurations, and data preprocessing configuration. You can click Load Configuration to load an existing configuration file; the corresponding configuration fields are then filled in automatically.

    Parameters on the Model Information page are described as follows:

    • Shape: The settings vary with the input data format.
      • When Input Format is a static-shape format, for example, NCHW or NCDHW:
        • Dynamic batch size. Applies to the scenario where the number of images processed per inference batch varies.

          Set N in the Shape text box to -1, and the Dynamic Batch Size text box is displayed under Shape. Enter the dynamic batch size profiles (separated by commas) in the text box. A maximum of 100 profiles is supported. Keep the batch size of each profile within [1, 2048]. For example, you can enter: 1,2,4,8

          If the Dynamic Batch Size parameter is set during model conversion, you need to add a call to aclmdlSetDynamicBatchSize before the aclmdlExecute call to set the runtime batch size when running an application project for model inference. For details about how to use the aclmdlSetDynamicBatchSize API, see the AscendCL API Reference > Model Loading and Execution in the AscendCL Application Software Development Guide (C&C++).

        • Dynamic image size. Applies to the scenario where the image size per inference batch varies.

          Set H and W in the Shape text box to -1, and the Dynamic Image Size text box is displayed under Shape. Enter at least two groups of dynamic image size profiles in the text box, with the height and width in a group separated by commas and the groups separated by semicolons. A maximum of 100 profiles is supported. For example, you can enter: 112,112;224,224

          If the Dynamic Image Size parameter is set during model conversion, you need to add a call to aclmdlSetDynamicHWSize before the aclmdlExecute call to set the runtime image size when running an application project for model inference. For details about how to use the aclmdlSetDynamicHWSize API, see the AscendCL API Reference > Model Loading and Execution in the AscendCL Application Software Development Guide (C&C++).

          If dynamic image size is enabled during model conversion and the data preprocessing function in Data Pre-Processing is required, the Crop and Padding functions are unavailable.

          Dynamic batch size and dynamic image size are mutually exclusive.

      • Dynamic dims (when Input Format is ND). Applies to the scenario where the dimensions processed per inference vary.

        Set the quantity and position of -1 in the Shape text box as required, and the Dynamic Dims text box is displayed under Shape. Enter 2 to 100 groups of dynamic dimension parameters in the text box. Separate the groups by semicolons (;), and parameters in a group by commas (,). The content in a group cannot be the same as that in another group. The minimum dimension in a group is 1. The parameter values in each group correspond to the parameters marked with -1 in Shape. If Shape contains several -1s, the corresponding number of parameter values in each group must be set. For example, if the input shape information is "-1,-1,-1,-1", the input Dynamic Dims parameter can be 1,224,224,3;2,448,448,6.
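The profile rules above can be sketched as a small validation helper. The function below is illustrative only (it is not part of MindStudio or the ATC tool); it encodes the constraints stated for Dynamic Dims: 2 to 100 unique groups, one value per -1 in Shape, and a minimum dimension of 1.

```python
def validate_dynamic_dims(shape, dims_text):
    """Illustrative check of the Dynamic Dims rules described above.

    shape     -- list of ints; -1 marks a dynamic dimension, e.g. [-1, -1, -1, -1]
    dims_text -- GUI text: groups separated by ';', values by ',',
                 e.g. "1,224,224,3;2,448,448,6"
    """
    n_dynamic = shape.count(-1)
    groups = [g for g in dims_text.split(";") if g]
    if not 2 <= len(groups) <= 100:
        raise ValueError("2 to 100 groups are required")
    seen = set()
    for g in groups:
        values = [int(v) for v in g.split(",")]
        if len(values) != n_dynamic:
            raise ValueError("each group needs one value per -1 in Shape")
        if min(values) < 1:
            raise ValueError("the minimum dimension in a group is 1")
        if tuple(values) in seen:
            raise ValueError("groups must not repeat")
        seen.add(tuple(values))
    return groups
```

For example, with Shape "-1,-1,-1,-1", the text "1,224,224,3;2,448,448,6" passes, while a repeated group raises an error.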

    • Type: supported data types.
      • Check the supported data types:
        • If the original framework type is Caffe or ONNX, the supported data types are FP32, FP16, and UINT8.
        • If the original framework type is MindSpore, the supported data types are FP32 and UINT8.
        • If the original framework type is TensorFlow, the supported input data types are FP32, FP16, UINT8, Int32, Int64, and Bool.
      • Determine whether the Data Pre-Processing tab page is configurable based on the value of Type.
        • If the original framework is Caffe, MindSpore, or ONNX, the Data Pre-Processing tab page in 2 can be configured only when Type is set to UINT8; for a model with multiple inputs, it can be configured only on the inputs whose Type is UINT8. Even when Type is UINT8, the Data Pre-Processing tab page cannot be configured if the H and W information in Shape cannot be obtained.
        • If the original framework is TensorFlow, the Data Pre-Processing tab page in 2 can be configured when Type is not FP16.
          • If Type is set to UINT8, the Data Pre-Processing tab is enabled by default and cannot be disabled.
          • If Type is set to FP32, Int32, Int64, or Bool, the Data Pre-Processing tab is disabled by default and can be manually enabled.
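The type rules above can be collected into a small lookup. The snippet below is a sketch of the constraints as stated (the helper name is hypothetical; the separate H/W availability check is omitted).

```python
# Supported input data types per original framework, as listed above.
SUPPORTED_TYPES = {
    "Caffe":      {"FP32", "FP16", "UINT8"},
    "ONNX":       {"FP32", "FP16", "UINT8"},
    "MindSpore":  {"FP32", "UINT8"},
    "TensorFlow": {"FP32", "FP16", "UINT8", "Int32", "Int64", "Bool"},
}

def preprocessing_configurable(framework, dtype):
    """Whether the Data Pre-Processing tab page can be configured for an
    input of the given type (assumes H and W can be obtained)."""
    if dtype not in SUPPORTED_TYPES[framework]:
        raise ValueError(f"{dtype} is not supported for {framework}")
    if framework == "TensorFlow":
        return dtype != "FP16"      # any type but FP16; UINT8 is on by default
    return dtype == "UINT8"         # Caffe, MindSpore, ONNX: UINT8 only
```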
  2. Click Next, moving on to the Data Pre-Processing tab page, as shown in Figure 2.
    The data preprocessing capability is backed by the AI Pre-processing (AIPP) module of the Ascend AI Processor. It enables hardware-based image preprocessing including color space conversion (CSC), image normalization (by subtracting the mean value or multiplying a factor), image cropping (by specifying the crop start and cropping the image to the size required by the neural network), and much more.
    Figure 2 Data preprocessing

    Table 2 describes the parameters.

    Table 2 Parameters on the Data Pre-Processing page

    Parameter

    Description

    Remarks

    Image Pre-processing Mode

    AIPP mode for image preprocessing.

    • Static: static AIPP.
    • Dynamic: dynamic AIPP.

    -

    If you set Image Pre-processing Mode to Static, also configure the following parameters:

    Load Aipp Configuration

    AIPP configuration loading. Dynamic AIPP does not support this function.

    This function is disabled by default. After it is enabled, select the corresponding configuration file in the Aipp Configuration File text box.

    After the configuration file is loaded, all parameters under Input Node: (data) and Input Node: (im_info) are automatically set based on the configuration file. You can turn on the switch on the right of Aipp Configuration File to display all parameters of Input Node: (data) and Input Node: (im_info) in the lower part of the page, and modify them as required.

    Input Node: (data)

    AIPP switch by node.

    • This parameter is automatically enabled only when Type of the data node is set to UINT8 in the Input Nodes area, as shown in Figure 1.
    • For a TensorFlow model, this parameter can be manually enabled when Type of the data node is set to FP32, Int32, Int64, or Bool in the Input Nodes area and the model width and height can be obtained, as shown in Figure 1.

    Input Node: (im_info)

    Switches AIPP on for the second input node. This parameter is available only when the model has two inputs.

    This parameter is automatically enabled only when Type of the im_info node is set to UINT8 in the Input Nodes area, as shown in Figure 1. Note that im_info in Input Node: (im_info) varies depending on the parsed model.

    Input Image Format

    Input image format.

    For details about more image input formats, see Description about "Input Image Format".

    Input Image Resolution

    Input image size.

    • If Input Image Format is set to YUV420 sp, the width and height must be even numbers.
    • If dynamic image size profiles are set for Shape in 1, the size of each input image is unfixed, that is, the width and height values must be 0.

    Model Image Format

    Model image format. The toggle also serves as the CSC switch (on by default). Enable this function when the input image format differs from the image format required by the model.

    The model image format varies with the input image format. For more information about the formats, see Description about "Model Image Format".

    Crop

    Cropping switch (off by default).

    After this parameter is enabled, two additional parameters are displayed in the lower part. For details about the parameter description and cropping restrictions, see Description about "Crop".

    Padding

    Image padding switch. If this switch is enabled, the image padding function is available. This function is disabled by default.

    The value range of Padding Area [L][R][B][T] is [0, 32]. Make sure the output height and width (after AIPP with padding is performed) are consistent with those required by the model.

    Normalization

    Normalization switch.

    After the switch is turned on, Conversion Type is displayed, indicating the calculation rule, which contains the Mean, Min, and Variance configuration options.

    Mean

    Mean value of each channel.

    This parameter is displayed only when Normalization is enabled. For details about the display modes and values of this parameter, see Description about "Mean".

    Min

    Minimum value of each channel.

    This parameter is displayed only when Normalization is enabled. For details about the display modes and values of this parameter, see Description about "Min".

    1/Variance

    Reciprocal of the variance of each channel.

    This parameter is displayed only when Normalization is enabled. For details about the display modes and values of this parameter, see Description about "1/Variance".

    If you set Image Pre-processing Mode to Dynamic, also configure the following parameters:

    Input Node: (data)

    AIPP switch by node.

    • This parameter is automatically enabled only when Type of the data node is set to UINT8 in the Input Nodes area, as shown in Figure 1.
    • For a TensorFlow model, this parameter can be manually enabled when Type of the data node is set to FP32, Int32, Int64, or Bool in the Input Nodes area and the model width and height can be obtained, as shown in Figure 1.

    Input Node: (im_info)

    Switches dynamic AIPP on for the second input node. This parameter is available only when the model has two inputs.

    This parameter is automatically enabled only when Type of the im_info node is set to UINT8 in the Input Nodes area, as shown in Figure 1. Note that im_info in Input Node: (im_info) varies depending on the parsed model.

    Max Image Size (Byte)

    Maximum size of the input image. Required in the dynamic AIPP scenario. (In the dynamic batch size scenario, N is set to the maximum batch size.)

    The value of this parameter varies with the image input format. For details, see Description about "Max Image Size (Byte)".

    Parameters on the Data Pre-Processing page are described as follows:

    • Input Image Format: The settings vary with the input image format.
      • If the setting is YUV420 sp, YVU420 sp, YUV422 sp, or YVU422 sp, the following data types are available on the right:

        BT.601 (Video Range), BT.601 (Full Range), BT.709 (Video Range), BT.709 (Full Range)

        Different data types correspond to different CSC configurations. (The CSC factors are stored in the insert_op.cfg configuration file after model conversion.)

        • BT.601 is the standard for standard-definition television (SDTV).
        • BT.709 is the standard for high-definition television (HDTV).

        The two standards are classified into narrow range (Video Range) and wide range (Full Range) according to their representation range.

        The narrow range represents luma values in [16, 235], and the wide range represents [0, 255]. For details about how to determine the standard of the input data, see How Do I Determine the Video Stream Format Standard When I Perform CSC on a Model Using AIPP? in the ATC Instructions.

      • If the setting is YUV400, CSC is not supported.
      • If the setting is RGB package, BGR package, ARGB package, RGBA package, ABGR package or BGRA package, no input data type option is displayed on the right. After the model conversion is complete, the values of the following parameters in the data preprocessing configuration file insert_op.cfg vary depending on the setting of Model Image Format:
        • If the setting is BGR package, the output is in RGB format. If the setting is RGB package, the output is in BGR format.
          # Whether to enable R/B or U/V channel swap before CSC
          rbuv_swap_switch : true
        • If the setting is BGR package, the output is in BGR format. If the setting is RGB package, the output is in RGB format.
          # Whether to enable R/B or U/V channel swap before CSC
          rbuv_swap_switch : false
        • If the setting is ARGB package, the output is in RGBA format. If the setting is ABGR package, the output is in BGRA format.
          # Whether to enable R/B or U/V channel swap before CSC
          rbuv_swap_switch : false     
          # Whether to enable RGBA->ARGB and YUVA->AYUV channel swap before CSC
          ax_swap_switch : true
        • If the setting is ARGB package, the output is in BGRA format. If the setting is ABGR package, the output is in RGBA format.
          rbuv_swap_switch : true 
          ax_swap_switch : true
        • If the setting is RGBA package, the output is in RGBA format. If the setting is BGRA package, the output is in BGRA format.
          rbuv_swap_switch : false 
          ax_swap_switch : false
        • If the setting is RGBA package, the output is in BGRA format. If the setting is BGRA package, the output is in RGBA format.
          rbuv_swap_switch : true 
          ax_swap_switch : false
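The switch combinations above follow a simple pattern: rbuv_swap_switch is true exactly when the R/B channel order of the input differs from that of the output, and ax_swap_switch is true when the input carries a leading alpha channel (ARGB or ABGR) that must move to the trailing position. The helper below is a sketch of that pattern (it is not a MindStudio API and covers only the packed RGB-family formats listed above).

```python
def swap_switches(input_fmt, output_fmt):
    """Derive (rbuv_swap_switch, ax_swap_switch) for packed RGB-family
    inputs, following the insert_op.cfg combinations listed above."""
    def rgb_order(fmt):
        # Channel letters without alpha, e.g. "ARGB package" -> "RGB"
        return fmt.split()[0].replace("A", "")
    rbuv_swap = rgb_order(input_fmt) != rgb_order(output_fmt)
    ax_swap = input_fmt.split()[0].startswith("A")  # leading alpha is moved
    return rbuv_swap, ax_swap
```

For example, "BGR package" with an RGB output needs the R/B swap, while "ARGB package" with an RGBA output needs only the alpha swap.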
    • Model Image Format: The model image format varies with the input image format. The following scenarios are involved.
      • If the input image format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, or BGR package, then Model Image Format is RGB or BGR.
      • If the input image format is RGB package, then Model Image Format is RGB, BGR, or GRAY.
      • If the input image format is YUV400, Model Image Format can only be set to GRAY.
      • If the input image format is ARGB package, RGBA package, ABGR package, or BGRA package, then Model Image Format is RGBA or BGRA.
      • If the AIPP configuration file selected for Aipp Configuration File in 2 is an abnormal file whose RGB data cannot be read, the default value of Model Image Format is GRAY.
    • Crop: After the cropping function is enabled, the parameters and cropping restrictions are described as follows:
      • If the toggle is switched on, the following two parameters are displayed:
        • Cropping Start: start position of image cropping. The values of Cropping Start [H][W] must be smaller than those of Input Image Resolution [H][W].
        • Cropping Area: size of the cropped image. The default width and height are consistent with those in the Shape text box in the Input Nodes area, as shown in Figure 1. The specified width and height cannot be greater than those of Input Image Resolution.
      • Note the following restrictions on image cropping:
        • If Input Image Format is set to YUV420 sp or YVU420 sp, the values of Cropping Start [H][W] must be even numbers.
        • If Input Image Format is set to YUV422 sp or YVU422 sp, the values of Cropping Start [W] must be even numbers.
        • If Input Image Format is set to other values, there is no restriction on Cropping Start [H][W].
        • If image cropping is switched on: Input Image Resolution >= Cropping Area + Cropping Start

        If image cropping is switched on with padding switched off, Cropping Area [H][W] can be set to 0 or left empty. In this case, the Cropping Area [H][W] values are obtained from the H and W shape information in the Input Nodes area (the height and width of the model input), as shown in Figure 1.

    • Mean: Based on the value of Model Image Format, the display modes and values of this parameter are described as follows:
      • If Model Image Format is set to RGB, this parameter is displayed as Mean: [R][G][B]. The default values for the three channels are 104, 117, and 123, respectively. You can change them as required.
      • If Model Image Format is set to BGR, this parameter is displayed as Mean: [B][G][R]. The default values for the three channels are 104, 117, and 123, respectively. You can change them as required.
      • If Model Image Format is set to GRAY, the default value of this parameter is 104. You can change it as required.
      • If Model Image Format is set to RGBA, this parameter is displayed as Mean: [R][G][B][A]. The default values for the four channels are 104, 117, 123, and 0, respectively. You can change them as required.
      • If Model Image Format is set to BGRA, this parameter is displayed as Mean: [B][G][R][A]. The default values for the four channels are 104, 117, 123, and 0, respectively. You can change them as required.
    • Min: Based on the value of Input Image Format, the display modes and values of this parameter are described as follows:
      • If Input Image Format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, YUV400, RGB package, or BGR package, this parameter is displayed as Min: [R][G][B]. The default value is 0 for each channel.
      • If Input Image Format is ARGB package, RGBA package, ABGR package, or BGRA package, this parameter is displayed as Min: [R][G][B][A]. The default value is 0 for each channel.
    • 1/Variance: Based on the value of Input Image Format, the display modes and values of this parameter are described as follows:
      • If Input Image Format is YUV420 sp, YVU420 sp, YUV422 sp, YVU422 sp, YUV400, RGB package, or BGR package, this parameter is displayed as 1/Variance: [R][G][B]. The default value is 1.0 for each channel.
      • If Input Image Format is ARGB package, RGBA package, ABGR package, or BGRA package, this parameter is displayed as 1/Variance: [R][G][B][A]. The default value is 1.0 for each channel.
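Taken together, Mean, Min, and 1/Variance define AIPP's per-channel normalization. The sketch below assumes the rule pixel_out = (pixel_in - mean - min) * (1/variance) applied channel-wise, which is how AIPP normalization is commonly described; treat the exact formula as an assumption and verify it against the ATC Instructions.

```python
def aipp_normalize(pixel, mean, minimum, var_reci):
    """Per-channel normalization sketch, assuming
    out = (in - mean - min) * (1/variance) for each channel."""
    return [(p - m - mn) * v
            for p, m, mn, v in zip(pixel, mean, minimum, var_reci)]
```

With the default RGB values (Mean 104/117/123, Min 0, 1/Variance 1.0), an input pixel [114, 127, 133] becomes [10.0, 10.0, 10.0].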
    • Max Image Size (Byte): The value of this parameter varies with the image input format. The options are as follows:
      • If Input Image Format is set to YUV400_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 1
      • If Input Image Format is set to YUV420SP_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 1.5
      • If Input Image Format is set to XRGB8888_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 4.
      • If Input Image Format is set to RGB888_U8: Max Image Size >= N * Input Image Resolution [W] * Input Image Resolution [H] * 3
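The size requirements above amount to a bytes-per-pixel factor per input image format. The helper below computes the minimum Max Image Size (Byte) for a given batch and resolution; it is illustrative only.

```python
import math

# Bytes per pixel for each input image format, as listed above.
BYTES_PER_PIXEL = {
    "YUV400_U8":   1,    # one luma byte per pixel
    "YUV420SP_U8": 1.5,  # luma plus 2x2-subsampled chroma
    "RGB888_U8":   3,
    "XRGB8888_U8": 4,
}

def min_max_image_size(fmt, n, width, height):
    """Minimum value of Max Image Size (Byte):
    N * W * H * bytes-per-pixel, rounded up to whole bytes."""
    return math.ceil(n * width * height * BYTES_PER_PIXEL[fmt])
```

For example, a batch of two 224 x 224 YUV420SP_U8 images requires at least 2 * 224 * 224 * 1.5 = 150528 bytes.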

    Pay attention to the following requirements on the input images:

    If AIPP is enabled during model conversion, model inference requires NHWC inputs. In this case, the format of the input node configured with AIPP is changed accordingly and may differ from the format specified by Input Format on the Model Information tab page in 1.

  3. Click Next, moving on to the Advanced Options Preview tab page to configure advanced options. See Figure 3.
    Figure 3 Advanced options preview

    Table 3 describes the parameters.

    Table 3 Parameters on the Advanced Options Preview page

    Parameter

    Description

    Remarks

    Operator Fusion

    Whether to disable the fusion function (toggle on by default).
    • Switch the toggle on to disable the fusion function. In this case, Fusion Passes is displayed.
    • Switch the toggle off to enable the fusion function.

    When the toggle is switched on, the fusion_switch.cfg configuration file is generated in the same directory as the .om model file, recording the fusion patterns that are disabled.

    NOTE:

    When the fusion function is disabled, only the fusion patterns specified in the configuration file are disabled. For details about the available fusion patterns that can be disabled, see the Graph Fusion and UB Fusion Patterns. Other fusion patterns cannot be disabled due to the system mechanism.

    Fusion Passes

    Fusion pattern to be disabled.

    If Operator Fusion is enabled, this option is displayed. For details about the fusion patterns disabled by default and how to add the fusion patterns to be disabled, see Description about "Fusion Passes".

    Log Print Level

    Log level configuration, which determines whether to print logs of the corresponding level during model conversion.

    • If this parameter is enabled, you can set a log level; logs are printed as described below.
    • If this parameter is disabled, no logs are printed. This parameter is disabled by default.

    Log level options:

    • error: outputs error and event logs.
    • warning: outputs warning, error, and event logs.
    • info: outputs info, warning, error, and event logs.
    • debug: outputs debug, info, warning, error, and event logs.

    Additional Arguments

    Adds additional model conversion arguments.

    • Enter space-delimited arguments that are not available on the GUI but are supported by the ATC or AOE tool, for example, --log=info. A maximum of 2048 characters is allowed. For details about the arguments, see "Command-Line Options" in the ATC Instructions or "AOE Command-Line Options" in the AOE Instructions.
    • If a GUI parameter and the corresponding command-line argument are configured for the same function, the command-line argument takes precedence. For example, the Log Print Level parameter on this tab page corresponds to the --log argument of the ATC tool. If you set --log to another value, such as --log="info", in Additional Arguments, the level specified by Log Print Level does not take effect.

    Environment Variables

    Environment variable. This parameter is optional. Set it as required.

    • Add environment variables in the text box, for example, environment variable_1=value1;environment variable_2=value2. Use semicolons (;) to separate multiple environment variables.

      Example: TE_PARALLEL_COMPILER=8;REPEAT_TUNE=False

      The environment variables are described as follows:

      • TE_PARALLEL_COMPILER: environment variable required for operator build in the AOE scenario.
      • REPEAT_TUNE: environment variable required for re-initiating tuning in the AOE scenario.
    • Click the icon next to the text box. In the displayed dialog box, add a variable:
      • Enter the name (for example, PATH_1) in the Name field.
      • Enter the value (for example, Value 1) in the Value field.

    For details about the environment variable configuration, see "Tuning in Offline Inference Scenarios (Other Inference Devices) > Environment Variable Configuration" in the AOE Instructions.

    Command Preview

    Previews the ATC or AOE arguments configured for model conversion and tuning. This setting cannot be modified.

    • After required arguments are set on all tab pages, you can preview the ATC or AOE commands converted from the GUI parameters. For example, if Log Print Level is set to error, the ATC command displayed in the Command Preview area is --log=error.
    • If you set an existing argument in Additional Arguments, for example, --log="info", the corresponding command-line argument is added to Command Preview. During model conversion and tuning, the recently added argument in Command Preview will overwrite the existing one.

    Parameters on the Advanced Options Preview page are described as follows:

    • Fusion Passes: The fusion patterns disabled by default and how to add the fusion patterns to be disabled are described as follows:
      • Fusion pattern disabled by default:

        V100RequantFusionPass, V200RequantFusionPass, ConvConcatFusionPass, SplitConvConcatFusionPass, TbeEltwiseQuantFusionPass, TbeConvDequantVaddReluQuantFusionPass, TbeConvDequantVaddReluFusionPass, TbeConvDequantQuantFusionPass, TbeDepthwiseConvDequantFusionPass, TbeFullyconnectionElemwiseDequantFusionPass, TbeConv2DAddMulQuantPass, TbePool2dQuantFusionPass, TbeAippConvReluQuantFusion, TbeCommonRules0FusionPass, TbeCommonRules2FusionPass

      • Adding fusion patterns to be disabled:
        • Enter the fusion patterns to be disabled in the text box in the format fusion_pattern_name1:off;fusion_pattern_name2:off. Separate multiple fusion patterns with semicolons (;).
        • Click the icon next to the text box. In the displayed dialog box, add an entry:
          • Enter the fusion pattern name (for example, fusion_pattern_name1) in the Name text box.
          • Enter the value off in the Value text box.

        Ensure that the entered fusion pattern is correct.
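The text-box format above can also be generated programmatically; the helper below is a hypothetical sketch, not part of MindStudio.

```python
def fusion_passes_text(pass_names):
    """Build the 'name:off;name:off' string entered in the Fusion Passes box."""
    return ";".join(f"{name}:off" for name in pass_names)
```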

  4. Click Finish to start model conversion.

    The model conversion log records are printed to the Output window in the lower part of MindStudio IDE. If the message "Model converted successfully" is displayed, the model conversion is complete. The Output window also displays the model conversion commands, environment variables, model conversion result, model output path, and model conversion log path.

    Figure 4 Successful model conversion
  5. After the model conversion is complete, find the generated .om model file for running in the operating environment, configuration file ${modelname}_config.json used in model conversion, and log file ModelConvert.txt, in the $HOME/modelzoo/resnet50/$Soc_Version directory.
    • If Data Preprocessing is switched on, the data preprocessing configuration file insert_op.cfg is generated in the same directory as the .om file.
    • If Operator Fusion is switched on, the fusion_switch.cfg configuration file is generated in the same directory as the .om model file to record the fusion patterns that are disabled.

    When Mode Type is set to ATC, find the model conversion log file ModelConvert.txt in $HOME/modelzoo/resnet50/$Soc_Version. Information similar to the following is displayed:

    drwxr-x--- 2       4096 Mar 10 16:46 ./
    drwx------ 3       4096 Mar 10 16:45 ../
    -rw------- 1       127 Mar 10 15:55 fusion_switch.cfg         --Fusion switch configuration file
    -rw------- 1       453 Mar 10 16:45 insert_op.cfg              --Data preprocessing configuration file
    -rw-r----- 1       453 Mar 10 16:45 ModelConvert.txt          --Log file
    -rw------- 1       2095 Mar 10 18:03 resnet50_config.json    --Model conversion configuration file. You can load this file to use the same configuration in future model conversion.
    -rw------- 1       51581408 Mar 10 16:46 resnet50.om           --Model file running on the board
    
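
    As a quick sanity check, the presence of the generated artifacts can be verified programmatically. The sketch below assumes the ResNet-50 example above (substitute your own ${modelname}); the helper check_atc_output is illustrative and not part of MindStudio or ATC.

    ```python
    import os

    # Artifacts that the ATC conversion above is expected to produce.
    # File names follow the ResNet-50 example in this guide.
    EXPECTED = [
        "resnet50.om",           # offline model that runs on the board
        "resnet50_config.json",  # reusable model conversion configuration
        "ModelConvert.txt",      # model conversion log
    ]

    def check_atc_output(out_dir):
        """Return the list of expected artifacts missing from out_dir."""
        present = set(os.listdir(out_dir))
        return [name for name in EXPECTED if name not in present]
    ```

    insert_op.cfg and fusion_switch.cfg are deliberately left out of the check: they are generated only when Data Preprocessing is switched on or Operator Fusion is switched off, respectively.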

    When Mode Type is set to AOE, find the model conversion log file ModelConvert.txt in $HOME/modelzoo/resnet50/aoe_type$Job_Type. Information similar to the following is displayed:

    dggphicprd32833:~/modelzoo/resnet50/aoe_type1$ ll
    total 51416
    drwxr-x--- 3       4096 Feb 24 16:16 ./
    drwxr-x--- 4       4096 Feb 24 16:10 ../
    -r-------- 1       475 Feb 24 16:16 aoe_result_sgat_20230224161101211040_pid57522.json        --Tuning result file (sgat for subgraph tuning and opat for operator tuning)
    drwxr-x--- 3       4096 Feb 24 16:11 aoe_workspace/                   
    -rw-r----- 1       6732 Feb 24 16:15 fusion_result.json           
    -rw------- 1       454 Feb 24 16:10 fusion_switch.cfg                             
    -rw------- 1       577 Feb 24 16:10 insert_op.cfg           
    -rw-r----- 1       849212 Feb 24 16:16 ModelConvert.txt
    -rw------- 1       3025 Feb 24 16:16 resnet50_config.json
    -rw------- 1       51752967 Feb 24 16:15 resnet50.om
    
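
    The tuning result file name in the listing above encodes the tuning mode, a timestamp, and the AOE process ID. The small parser below is illustrative only; its pattern is inferred from that example name, not from a documented AOE naming contract.

    ```python
    import re

    # aoe_result_<mode>_<timestamp>_pid<pid>.json, where <mode> is
    # "sgat" (subgraph tuning) or "opat" (operator tuning).
    RESULT_RE = re.compile(
        r"aoe_result_(?P<mode>sgat|opat)_(?P<timestamp>\d+)_pid(?P<pid>\d+)\.json$"
    )

    def parse_aoe_result_name(name):
        """Split an AOE result file name into (mode, timestamp, pid), or None."""
        m = RESULT_RE.match(name)
        if m is None:
            return None
        return m.group("mode"), m.group("timestamp"), int(m.group("pid"))
    ```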

Exception Handling

  • Symptom

    If the selected model file contains operators unsupported by the Ascend AI Processor, the network analysis report is displayed during model conversion, as shown in Figure 5.

    Figure 5 Network analysis report

    In the Summary area on the left:

    • All Operator: indicates the total number of operators in the model to be converted, including unsupported operators.

      Click All Operator. In the Result Details area on the right, details about all operators of the model are displayed, including the operator type, operator name, and parsing result. If the operator fails to be parsed, the cause of the parsing failure is displayed in Description.

    • UnSupported Operator: indicates the number of operators that are not supported by model conversion. The classification and specific causes are displayed below.

      Click UnSupported Operator. All unsupported operators are displayed on the right.

  • Solution
    1. In the network analysis report window shown in Figure 5, check Result Details on the right. If failed is shown in the Result column for an operator, select that operator and click one of the solutions under Operation, for example, Create Operator, to create a custom operator project.

      If an operator project has been opened, a dialog box is displayed, as shown in Figure 6. You can add an operator to the current project or create a new project. If no operator project exists, a dialog box for creating an operator project is displayed.

      For details about how to create a custom operator project, see Operator Project Creation.
      Figure 6 Message displayed upon the creation of an operator project
    2. Create the custom operator project by following the displayed procedure.

      The Operator Type of the custom operator in New Project > Ascend Operator is automatically filled in based on the operator type selected in the network analysis report. After the project is created, it is stored in $HOME/AscendProjects by default.

      The directory structure and main files of the operator project are as follows:
      ├── .idea
      ├── build                                  //Intermediate files generated after build
      ├── cmake                                  //Directory of public files related to build
      ├── framework                              //Directory of the operator plugin implementation files
      │   ├── tf_plugin                         //Directory of the operator plugin files and build rule files of the TensorFlow framework
      │   │   ├── tensorflow_add_plugin.cpp 
      ......
      ......
    3. Develop the custom operator. For details, see Operator Development.

      After the custom operator is developed, try to convert the model again.