Quick Start

This section describes how to quickly perform AOE tuning on models trained with different frameworks. The following uses operator tuning as an example; if subgraph tuning is required, change the value of --job_type in the command to 1.
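As a sketch of the difference between the two modes, only the --job_type value changes between operator tuning and subgraph tuning (the model path below is a placeholder; substitute your own file):

```shell
# Placeholder model path; substitute your own model file.
MODEL="${HOME}/module/resnet50_tensorflow_1.7.pb"

# --job_type=2 selects operator tuning; --job_type=1 selects subgraph tuning.
OP_TUNING="aoe --framework=3 --model=${MODEL} --job_type=2"
SUBGRAPH_TUNING="aoe --framework=3 --model=${MODEL} --job_type=1"

echo "$SUBGRAPH_TUNING"
```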

Prerequisites

Network Model Trained on the Open Source TensorFlow Framework

  1. Obtain a TensorFlow network model.

    Click here to download the .pb model file of the ResNet-50 network and upload the file to any directory in the development environment as the CANN running user, for example, ${HOME}/module/.

  2. Run the following command to perform AOE tuning (the path and file name below are examples only):
    aoe --framework=3 --model=${HOME}/module/resnet50_tensorflow_1.7.pb --job_type=2

    For details about the command-line options, see AOE Command-Line Options.

  3. If the following information is displayed, the AOE tuning is complete:
    Aoe process finished
    

    After the tuning is complete, the following files are generated:

    • Custom repository: If the conditions for generating a custom repository are met (see Figure 3), a custom repository is generated. The generated custom repository is stored in the ${HOME}/Ascend/latest/data/aoe/custom/op/${soc_version} directory by default. For details about how to use the tuned custom repository, see Usage of Tuned Custom Repositories.
    • .om model: The model is stored in ${WORK_PATH}/aoe_workspace/${model_name}_${timestamp}/tunespace/result/${model_name}_${timestamp}_tune.om (or ${model_name}_${timestamp}_tune_${os}_${arch}.om) by default. This model can be directly used for inference.
    • Tuning result file: After the tuning is complete, a file named aoe_result_opat_${timestamp}_${pidxxx}.json is generated in the working directory where the tuning is performed. This file records the information about the tuned operators. For details about the fields in this file, see Table 1.
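Because the tuning result file follows the fixed name pattern aoe_result_opat_${timestamp}_${pidxxx}.json, the newest one in the working directory can be located by name; a minimal sketch:

```shell
# Print the newest AOE tuning result file in the current working
# directory (files are named aoe_result_opat_<timestamp>_<pid>.json).
latest_aoe_result() {
  ls -t aoe_result_opat_*.json 2>/dev/null | head -n 1
}
```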

ONNX Network Model

  1. Obtain an ONNX network model.

    Click here to go to the ModelZoo page and obtain the .onnx model file by referring to "Getting Started > Model Inference" in README.md. Then upload the .onnx model file to any directory in the development environment as the CANN running user, for example, ${HOME}/module/.

  2. Run the following command to perform AOE tuning (the path and file name below are examples only):
    aoe --framework=5 --model=${HOME}/module/resnet50_pytorch_1.4.onnx --job_type=2

    For details about the command-line options, see AOE Command-Line Options.

  3. If the following information is displayed, the AOE tuning is complete:
    Aoe process finished
    

    After the tuning is complete, the following files are generated:

    • Custom repository: If the conditions for generating a custom repository are met (see Figure 3), a custom repository is generated. The generated custom repository is stored in the ${HOME}/Ascend/latest/data/aoe/custom/op/${soc_version} directory by default. For details about how to use the tuned custom repository, see Usage of Tuned Custom Repositories.
    • .om model: The model is stored in ${WORK_PATH}/aoe_workspace/${model_name}_${timestamp}/tunespace/result/${model_name}_${timestamp}_tune.om (or ${model_name}_${timestamp}_tune_${os}_${arch}.om) by default. This model can be directly used for inference.
    • Tuning result file: After the tuning is complete, a file named aoe_result_opat_${timestamp}_${pidxxx}.json is generated in the working directory where the tuning is performed. This file records the information about the tuned operators. For details about the fields in this file, see Table 1.
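The default .om output path above can be assembled from its components; a sketch using hypothetical values for ${model_name} and ${timestamp} (your actual values will differ):

```shell
# Hypothetical values for illustration only.
WORK_PATH="${WORK_PATH:-$PWD}"
MODEL_NAME="resnet50_pytorch_1.4"
TIMESTAMP="20240101120000"

# Default location of the tuned model that AOE writes out.
TUNED_OM="${WORK_PATH}/aoe_workspace/${MODEL_NAME}_${TIMESTAMP}/tunespace/result/${MODEL_NAME}_${TIMESTAMP}_tune.om"
echo "$TUNED_OM"
```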

Network Model Trained on the MindSpore Framework

  1. Obtain a MindSpore network model.

    Click here to download the model file of the ResNet-50 network (only .air format is supported) and upload the file to any directory in the development environment as the CANN running user, for example, ${HOME}/module/.

  2. Run the following command to perform AOE tuning (the path and file name below are examples only):
    aoe --framework=1 --model=${HOME}/module/ResNet50_for_MindSpore_1.4.air --job_type=2

    For details about the command-line options, see AOE Command-Line Options.

  3. If the following information is displayed, the AOE tuning is complete:
    Aoe process finished
    

    After the tuning is complete, the following files are generated:

    • Custom repository: If the conditions for generating a custom repository are met (see Figure 3), a custom repository is generated. The generated custom repository is stored in the ${HOME}/Ascend/latest/data/aoe/custom/op/${soc_version} directory by default. For details about how to use the tuned custom repository, see Usage of Tuned Custom Repositories.
    • .om model: The model is stored in ${WORK_PATH}/aoe_workspace/${model_name}_${timestamp}/tunespace/result/${model_name}_${timestamp}_tune.om (or ${model_name}_${timestamp}_tune_${os}_${arch}.om) by default. This model can be directly used for inference.
    • Tuning result file: After the tuning is complete, a file named aoe_result_opat_${timestamp}_${pidxxx}.json is generated in the working directory where the tuning is performed. This file records the information about the tuned operators. For details about the fields in this file, see Table 1.
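To verify whether a custom repository was generated, you can list the default location described above; a minimal sketch (the trailing ${soc_version} directory varies by device):

```shell
# Check whether AOE generated a custom repository at the default
# location; prints its contents if present.
show_custom_repo() {
  repo="${1:-$HOME/Ascend/latest/data/aoe/custom/op}"
  if [ -d "$repo" ]; then
    ls -R "$repo"
  else
    echo "no custom repository at $repo"
  fi
}
show_custom_repo
```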

Network Model Trained on the Open Source Caffe Framework

  1. Obtain a Caffe network model.

    Download the .prototxt model file and .caffemodel weight file of the ResNet-50 network and upload the files to any directory in the development environment as the CANN running user, for example, ${HOME}/module/.

    • ResNet-50 network model file (*.prototxt): Click here to download the file.
    • ResNet-50 weight file (*.caffemodel): Click here to download the file.
  2. Run the following command to perform AOE tuning (the path and file names below are examples only):
    aoe --framework=0 --model=${HOME}/module/resnet50.prototxt --weight=${HOME}/module/resnet50.caffemodel --job_type=2

    For details about the command-line options, see AOE Command-Line Options.

  3. If the following information is displayed, the AOE tuning is complete:
    Aoe process finished
    

    After the tuning is complete, the following files are generated:

    • Custom repository: If the conditions for generating a custom repository are met (see Figure 3), a custom repository is generated. The generated custom repository is stored in the ${HOME}/Ascend/latest/data/aoe/custom/op/${soc_version} directory by default. For details about how to use the tuned custom repository, see Usage of Tuned Custom Repositories.
    • .om model: The model is stored in ${WORK_PATH}/aoe_workspace/${model_name}_${timestamp}/tunespace/result/${model_name}_${timestamp}_tune.om (or ${model_name}_${timestamp}_tune_${os}_${arch}.om) by default. This model can be directly used for inference.
    • Tuning result file: After the tuning is complete, a file named aoe_result_opat_${timestamp}_${pidxxx}.json is generated in the working directory where the tuning is performed. This file records the information about the tuned operators. For details about the fields in this file, see Table 1.
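In scripts, success can be detected by checking the captured AOE output for the completion message shown in step 3; a minimal sketch:

```shell
# Return success if the captured AOE output contains the
# completion message "Aoe process finished" from step 3.
aoe_finished() {
  case "$1" in
    *"Aoe process finished"*) return 0 ;;
    *) return 1 ;;
  esac
}
```

For example, run the tuning as `aoe ... 2>&1 | tee aoe.log` and then call `aoe_finished "$(cat aoe.log)"` to decide whether to proceed.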