Using an AscendCL Sample Project

This section describes how to use the AscendCL sample project ACL ResNet-50(C++) or ACL ResNet-50(Python). For more examples, see https://gitee.com/ascend/samples.

Description

This sample shows how to classify images based on the Caffe ResNet-50 network (single input with batch size = 1).

Convert the model file of the Caffe ResNet-50 network into an .om offline model adapted to the Ascend AI Processor. The sample loads the .om file, performs synchronous inference on two .jpg images, processes the inference results, and outputs the class indexes with the top 5 confidence values.

Figure 1 Sample diagram

Principles

The following describes the key functions involved in this sample, using the C++ APIs as an example. For details about the APIs, see the Application Software Development Guide (C&C++) or the Application Software Development Guide (Python).

Initialization

  • aclInit: initializes AscendCL.
  • aclFinalize: deinitializes AscendCL.

Device Management

  • aclrtSetDevice: sets the compute device.
  • aclrtGetRunMode: obtains the run mode of the Ascend AI Software Stack. The internal processing varies with the run mode.
  • aclrtResetDevice: resets the compute device and cleans up all resources associated with the device.

Context Management

  • aclrtCreateContext: creates a context.
  • aclrtDestroyContext: destroys a context.

Stream Management

  • aclrtCreateStream: creates a stream.
  • aclrtDestroyStream: destroys a stream.

Memory Management

  • aclrtMalloc: allocates device memory.
  • aclrtFree: frees device memory.

Data Transfer

aclrtMemcpy (for running the application on the host):

  • Copies the input data from the host to the device before inference.
  • Copies the inference result from the device back to the host.

If the application runs on the device (including a developer board), the data is already in device memory and no such transfer is needed.
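The run-mode branching described above can be modeled in a few lines. The following Python sketch is illustrative only (the sample's actual logic is C++ code built on aclrtGetRunMode and aclrtMemcpy); it just captures which copies each mode requires:

```python
def copies_needed(run_mode):
    """Return the (source, destination) copies an inference pass needs.

    run_mode is "HOST" or "DEVICE" here; these strings are illustrative
    stand-ins, not AscendCL constants.
    """
    if run_mode == "DEVICE":
        # Application runs on the device (e.g. a developer board):
        # input and output buffers are already in device memory.
        return []
    # Application runs on the host: stage the input onto the device
    # before inference, and fetch the result back afterwards.
    return [("host", "device"), ("device", "host")]
```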

Model Inference

  • aclmdlLoadFromFileWithMem: loads a model from an .om file.
  • aclmdlExecute: performs synchronous model inference.
  • aclmdlUnload: unloads a model.

Data Postprocessing

The sample provides code that processes the model inference result and outputs the class indexes with the top 5 confidence values.

The sample also provides a custom API, DumpModelOutputResult, which writes the model inference result to a file (after the executable file is run, the inference result file is generated in the directory of the executable file in the operating environment). This API is not called by default. To call it, add the following code before the call to OutputModelResult in sample_process.cpp:

// To also dump the inference result to a file in the current directory,
// call DumpModelOutputResult before printing the top 5 confidence values.
processModel.DumpModelOutputResult();
processModel.OutputModelResult();
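The post-processing itself is simple: treat the model output as a vector of per-class confidences and pick the five largest entries. The sample's real implementation is C++ in sample_process.cpp; the following is only a stand-alone Python sketch of the same idea, using a toy 8-class score vector:

```python
def top5(confidences):
    """Return (class_index, confidence) pairs for the 5 largest values."""
    indexed = sorted(enumerate(confidences), key=lambda kv: kv[1], reverse=True)
    return indexed[:5]

# Toy output vector with 8 classes (a real ResNet-50 output has 1000).
scores = [0.01, 0.40, 0.05, 0.20, 0.10, 0.02, 0.15, 0.07]
for idx, conf in top5(scores):
    print(f"class {idx}: {conf:.2f}")
```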

ACL ResNet-50(C++) Usage Guide

  1. Create the ACL ResNet-50(C++) sample project by referring to Creating an Application Project.
  2. Prepare model files and data.
    1. Obtain model files.
      Download the ResNet-50 network model file (*.prototxt) and weight file (*.caffemodel) from the following links and upload them to the same directory on the MindStudio installation server as the MindStudio installation user:
      • ResNet-50 network model file (*.prototxt): Click here to download the file.
      • ResNet-50 weight file (*.caffemodel): Click here to download the file.
    2. Convert your model.

      Before adding model files, convert the model trained in a third-party framework to an offline model adapted to the Ascend AI Processor (*.om). For details about the model conversion function, see Model Conversion.

      1. On the MindStudio top menu bar, choose Ascend > Model Converter. The Model Converter page is displayed.
      2. Set the model conversion information as follows:
        • Model File: Select the uploaded ResNet-50 network model file (*.prototxt).
        • Weight File: Select the uploaded ResNet-50 weight file (*.caffemodel).
        • Model Name: Enter a model name, for example, resnet50.
        • Target SoC Version: Specify the value as required.
        • OutputPath: Select the output directory of the generated .om model. The default output path is $HOME/modelzoo/${Model Name}/${Target SoC Version}/.
      3. Click Next. In this sample, do not configure the Data Pre-Processing function. Directly click Next to go to the next step.
      4. On the Advanced Options Preview page, click Command Preview to check the generated model conversion command. After confirmation, click Finish to start model conversion.
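      The command shown in Command Preview is an invocation of atc, the offline model conversion tool. With the settings above it looks roughly like the following sketch; the file paths and the --soc_version value are placeholders that depend on your own configuration (--framework=0 selects Caffe):

```shell
atc --framework=0 \
    --model=$HOME/resnet50.prototxt \
    --weight=$HOME/resnet50.caffemodel \
    --output=$HOME/modelzoo/resnet50/Ascend310/resnet50 \
    --soc_version=Ascend310
```

      Note that --output takes the output path without the .om suffix; atc appends it, so this sketch would produce resnet50.om.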
    3. Add one or more model files.
      Upload the .om model files to the project_file_root_directory/model folder. You can view the added model files in the Project window.
      ├── ProjectName
      │   ├── model          
      │   │   ├──resnet50.om           // Model file.
    4. Prepare inference data. Obtain the input images of this sample from the following links and perform operations based on the actual application scenario.
      Links: data image 1 and data image 2
      • Linux application scenario
        1. Upload the obtained file to the project_directory/data directory in the development environment as the running user. If the directory does not exist, create one.
        2. Log in to the development environment as the running user.
        3. Process the image data in either of the following ways:
          1. Configure the Python interpreter. For details, see Python SDK Dependency Settings. In the MindStudio project directory, right-click script/transferPic.py and choose Run 'transferPic' from the shortcut menu. On the displayed run configuration page, set Working directory to project_directory/data and click OK. Then right-click transferPic.py again and choose Run 'transferPic' from the shortcut menu.
          2. Switch to the project_directory/data directory and run the transferPic.py script.
            python3 ../script/transferPic.py

          The script converts the .jpg files into .bin files and resizes the images from 1024 x 683 to 224 x 224. The generated .bin files are in the project_directory/data directory.

          If you get the following error message during script execution, the Pillow library is missing. Run the pip3 install Pillow --user command to install the Pillow library.

          ModuleNotFoundError: No module named 'PIL'
      • Windows application scenario
        1. After obtaining the images, perform the following steps to convert the image format. Assume that the images are stored in C:\Users\user1\AscendProjects\MyApp1\data.
        2. Open the system CLI.
        3. Run the following command to go to the C:\Users\user1\AscendProjects\MyApp1\data directory:
          cd C:\Users\user1\AscendProjects\MyApp1\data
        4. Run the following command to convert the image format:
          python ..\script\transferPic.py
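transferPic.py depends on Pillow because JPEG decoding requires it; the rest of the conversion is just a resize followed by a raw dump of the pixel bytes. As an illustration only (not the actual script), here is a stdlib-only sketch of that resize-and-dump step, operating on an already-decoded flat RGB byte buffer:

```python
def resize_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour resize of a flat RGB byte buffer (3 bytes/pixel)."""
    out = bytearray()
    for y in range(dst_h):
        sy = y * src_h // dst_h          # source row for this output row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # source column for this output column
            i = (sy * src_w + sx) * 3
            out += pixels[i:i + 3]
    return bytes(out)

def dump_bin(pixels, path):
    """Write the raw pixel bytes to a .bin file, as the script does."""
    with open(path, "wb") as f:
        f.write(pixels)
```

The real script would decode each .jpg with Pillow, resize to 224 x 224, and pass the resulting bytes to a dump step like dump_bin.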
  3. Build the application project.
    On the MindStudio project page, choose Build > Edit Build Configuration... from the top menu bar.
    1. Set build parameters and click to add configurations.
      The configuration option marked with (default) in the list is the default one. To switch the default option to a custom option, select the custom option and click .
      Figure 2 Build configuration
      Table 1 Description of build configuration parameters
      • Name: Build configuration name.
      • Build Type: Build type. The options are Debug and Release.
      • Toolchain: Toolchain configuration, which supports local and remote builds. For details, see Toolchains.
      • CMake options: CMake options. The default value is -DCMAKE_BUILD_TYPE=Debug -DCMAKE_SKIP_RPATH=TRUE.
      • Build directory: Path of the build directory, relative to the project directory.
      • Build options: Build acceleration options.
      • Environment Variables: Environment variables for the build. You can configure them manually or click to configure them in the displayed dialog box.

    2. Click Save to save the build configuration.
  4. Start the build.

    On the MindStudio project page, choose Build > Build CMake Project from the top menu bar.

  5. Run the application project.

    On the MindStudio project page, choose Run > Edit Configuration... from the top menu bar.

    Click + in the upper right corner, create a CMake Application run configuration, and perform the following steps to configure it:
    1. Configure run parameters.
      Figure 3 shows the configuration examples in the Linux application scenarios. After the configuration is complete, click OK to close the run configuration page. (Because a relative path is used in the sample project, set Working Directory to the directory where the main executable file is located.)
      Figure 3 Run configuration page
    2. On the MindStudio project page, choose Run > Run... from the top menu bar. In the dialog box displayed, select the created run configuration and run the application.

      Figure 4 shows an example of the execution result. The indexes and values vary depending on the operating environment.

      Figure 4 Execution result
      If an error message is displayed during application execution, view the detailed logs of the operating environment. For details, see the Log Reference.
      • The default log path is $HOME/ascend/log. For details, see the "Log Files" section.
      • The default log level is ERROR. For details about how to query or change the log level, see the "Log Level Setting" section.

ACL ResNet-50(Python) Usage Guide

  1. Create the ACL ResNet-50(Python) sample project by referring to Creating an Application Project.
  2. Prepare model files and data.
    1. Obtain model files.
      Download the ResNet-50 network model file (*.prototxt) and weight file (*.caffemodel) from the following links and upload them to the same directory on the MindStudio installation server as the MindStudio installation user:
      • ResNet-50 network model file (*.prototxt): Click here to download the file.
      • ResNet-50 weight file (*.caffemodel): Click here to download the file.
    2. Convert your model.

      Convert third-party models into .om offline models that adapt to the Ascend AI Processor. For details about the model conversion function, see Model Conversion.

      1. On the MindStudio top menu bar, choose Ascend > Model Converter. The Model Converter page is displayed.
      2. Set the model conversion information as follows:
        • Model File: Select the uploaded ResNet-50 network model file (*.prototxt).
        • Weight File: Select the uploaded ResNet-50 weight file (*.caffemodel).
        • Model Name: Enter a model name, for example, resnet50.
        • Target SoC Version: Specify the value as required.
        • OutputPath: Select the output directory of the generated .om model. The default output path is $HOME/modelzoo/${Model Name}/${Target SoC Version}/.
      3. Click Next. In this sample, do not configure the Data Pre-Processing function. Directly click Next to go to the next step.
      4. On the Advanced Options Preview page, click Command Preview to check the generated model conversion command. After confirmation, click Finish to start model conversion.
    3. Add one or more model files.
      Upload the .om model files to the project_file_root_directory/model folder. You can view the added model files in the Project window.
      ├── ProjectName
      │   ├── model          
      │   │   ├──resnet50.om           // Model file.
    4. Prepare inference data. Obtain the input images of this sample from the following links and perform operations based on the actual application scenario.
      Links: data image 1 and data image 2
      • Linux application scenario

        Upload the obtained file to the project_directory/data directory in the development environment as the running user. If the directory does not exist, create one.

      • Windows application scenario

        After obtaining the images, save them to the project_directory/data directory, for example, C:\Users\user1\AscendProjects\MyApp1\data.

  3. Run the application project. (For details about how to configure the Python interpreter, see Python SDK Dependency Settings.)
    On the MindStudio project page, choose Run > Edit Configuration... from the top menu bar.
    1. Configure run parameters.
      Figure 5 shows a configuration example in the Linux application scenario. After the configuration is complete, click Apply to save the run configuration and click OK to close the run configuration dialog box.
      Figure 5 Run configuration dialog box
    2. On the MindStudio project page, choose Run > Run... from the top menu bar. In the dialog box displayed, select the created run configuration and run the application.

      Figure 6 shows an example of the execution result. The index and value vary depending on the operating environment.

      Figure 6 Execution result
      If an error message is displayed during application execution, view the detailed logs of the operating environment. For details, see the Log Reference.
      • The default log path is $HOME/ascend/log. For details, see the "Log Files" section.
      • The default log level is ERROR. For details about how to query or change the log level, see the "Log Level Setting" section.