Getting Started

This chapter uses samples to describe how to quickly convert models from different frameworks.

  • Version compatibility:
    • An OM offline model converted with an earlier CANN version can run in an environment with a later CANN version. This compatibility is guaranteed for one year.
    • In dynamic shape and static Ascend virtual instance scenarios, a model converted with a CANN version earlier than 6.0.1 cannot be used for inference in CANN 6.0.1 or later. In this case, convert the model again with CANN 6.0.1 or later. For details about how to query the ATC version used to build an existing offline model, see Software Version Query.
  • If your model contains custom operators, develop and deploy them by referring to the TBE&AI CPU Operator Developer Guide. During model conversion, operators in the user model are looked up in the custom operator package (OPP) first, and only then in the built-in OPP.
  • Certain Caffe models (such as Faster R-CNN, YOLOv3, YOLOv2, and SSD) contain operators that are not defined in the original Caffe framework, namely ROIPooling, Normalize, PSROI Pooling, and Upsample. To run these models on the Ascend AI Processor while keeping custom operator development and postprocessing programming manageable, the network needs to be customized. For details, see Custom Caffe Network Modification.
  • TensorFlow models that contain control flow operators cannot be converted by ATC directly. Convert such a model to an intermediate model with function operators first, and then use ATC to convert the intermediate model to an offline model adapted to the Ascend AI Processor. For details, see Custom Network Modification (TensorFlow). A quick way to check whether a model contains control flow operators is sketched below.
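
    Because the presence of control flow operators determines whether this extra conversion step is needed, the following is a minimal sketch that scans a frozen *.pb file for the standard TF v1 control flow operators (the file path is an example only; TensorFlow must be installed):

    # Scan a frozen GraphDef for TF v1 control flow operators.
    import tensorflow as tf

    CONTROL_FLOW_OPS = {"Switch", "Merge", "Enter", "Exit", "NextIteration", "LoopCond"}

    graph_def = tf.compat.v1.GraphDef()
    with open("resnet50_tensorflow.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    found = sorted({node.op for node in graph_def.node if node.op in CONTROL_FLOW_OPS})
    print("Control flow operators found:", found or "none")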

Converting an Open-Source TensorFlow Model to an Offline Model

  1. Obtain a TensorFlow model.

    Click here to download the *.pb model file of the ResNet-50 network and upload the file to any directory in the development environment as the CANN running user, for example, $HOME/module/.
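
    If you would rather export the .pb file yourself than download it, the following is a hedged sketch using Keras (TensorFlow 2.x and an internet connection for the pretrained weights are assumed; the output path and file name are examples):

    # Export ResNet-50 from Keras as a frozen *.pb graph.
    import os
    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    model = tf.keras.applications.ResNet50(weights="imagenet")
    fn = tf.function(lambda x: model(x)).get_concrete_function(
        tf.TensorSpec([1, 224, 224, 3], tf.float32))
    frozen = convert_variables_to_constants_v2(fn)
    tf.io.write_graph(frozen.graph.as_graph_def(), os.path.expanduser("~/module"),
                      "resnet50_tensorflow.pb", as_text=False)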

  2. Run the following command (the path and file arguments in the command are for reference only).
    atc --model=$HOME/module/resnet50_tensorflow*.pb --framework=3 --output=$HOME/module/out/tf_resnet50 --soc_version=<soc_version>   

    For details about the command-line options, see Command Line Options. Replace <soc_version> with the name of the SoC on which inference will run. For how to query the SoC name, see --soc_version.

  3. You should see information similar to the following if the conversion is successful. If it fails, refer to Troubleshooting to locate the fault.
    ATC run success
    

    Find the generated offline model (for example, tf_resnet50.om) in the directory specified by the --output argument.

    If the model build fails because the data type of an AI CPU operator is not supported, you can enable the Auto Cast feature (a Cast operator is automatically inserted to convert the data type to a supported one). For details, see How Do I Enable Auto Cast for AI CPU Operators?.

  4. (Follow-up) To run inference on the generated offline model, prepare the environment, the OM model file, and input data in .bin format that matches the model's input requirements, and then run the msame tool. Click here to obtain the msame tool, and refer to its readme file for instructions. A sketch for generating such input data follows.
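
    The following is a minimal sketch for generating a random .bin input with NumPy, assuming the model expects a 1x224x224x3 float32 tensor (adjust the shape and data type to your model's actual input); the msame invocation shown in the comment is illustrative only, and the exact options are described in the tool's readme:

    # Generate a random input tensor and dump it as raw bytes.
    import numpy as np

    data = np.random.rand(1, 224, 224, 3).astype(np.float32)
    data.tofile("input.bin")

    # Illustrative msame invocation (see the msame readme for exact options):
    #   ./msame --model tf_resnet50.om --input input.bin --output ./out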

Converting an ONNX Model to an Offline Model

  1. Obtain an ONNX model.

    Click here to go to the ModelZoo page and obtain the .onnx model file by referring to "Getting Started" > "Model Inference" in README.md. Then upload the .onnx model file to any directory (for example, $HOME/module/) of the development environment as the CANN running user.
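
    Alternatively, if PyTorch and torchvision are available, a .onnx file can be exported directly, as in the following hedged sketch (the pretrained weights are downloaded on first use; the opset version and file name are examples):

    # Export ResNet-50 from torchvision to ONNX.
    import torch
    import torchvision

    model = torchvision.models.resnet50(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "resnet50.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=11)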

  2. Run the following command (the path and file arguments in the command are for reference only).
    atc --model=$HOME/module/resnet50*.onnx --framework=5 --output=$HOME/module/out/onnx_resnet50 --soc_version=<soc_version>  

    For details about the command-line options, see Command Line Options. Replace <soc_version> with the name of the SoC on which inference will run. For how to query the SoC name, see --soc_version.

  3. You should see information similar to the following if the conversion is successful. If it fails, refer to Troubleshooting to locate the fault.
    ATC run success
    

    Find the generated offline model (for example, onnx_resnet50.om) in the path specified by the --output argument.

    If the model build fails because the data type of an AI CPU operator is not supported, you can enable the Auto Cast feature (a Cast operator is automatically inserted to convert the data type to a supported one). For details, see How Do I Enable Auto Cast for AI CPU Operators?.

  4. (Follow-up) To run inference on the generated offline model, prepare the environment, the OM model file, and input data in .bin format that matches the model's input requirements, and then run the msame tool. Click here to obtain the msame tool, and refer to its readme file for instructions.

Converting an Open-Source Caffe Model to an Offline Model

  1. Obtain a Caffe network model.

    Download the .prototxt model file and .caffemodel weight file of the ResNet-50 network and upload the files to any directory in the development environment as the CANN running user, for example, $HOME/module/. A quick way to check that the two files match is sketched after the list below.

    • ResNet-50 network model file (*.prototxt): Click here to download the file.
    • ResNet-50 weight file (*.caffemodel): Click here to download the file.
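
    If pycaffe is installed, the following minimal sketch loads the pair to verify that the network definition and the weights are consistent before conversion (the file paths are examples only):

    # Load the network in TEST phase to confirm the two files match.
    import caffe

    net = caffe.Net("resnet50.prototxt", "resnet50.caffemodel", caffe.TEST)
    print("Inputs:", {name: net.blobs[name].data.shape for name in net.inputs})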
  2. Run the following command (the path and file arguments in the command are for reference only).
    atc --model=$HOME/module/resnet50.prototxt --weight=$HOME/module/resnet50.caffemodel --framework=0 --output=$HOME/module/out/caffe_resnet50 --soc_version=<soc_version>  

    For details about the command-line options, see Command Line Options. Replace <soc_version> with the name of the SoC on which inference will run. For how to query the SoC name, see --soc_version.

  3. You should see information similar to the following if the conversion is successful. If it fails, refer to Troubleshooting to locate the fault.
    ATC run success
    

    Find the generated offline model (for example, caffe_resnet50.om) in the path specified by the --output argument.

    If the model build fails because the data type of an AI CPU operator is not supported, you can enable the Auto Cast feature (a Cast operator is automatically inserted to convert the data type to a supported one). For details, see How Do I Enable Auto Cast for AI CPU Operators?.

  4. (Follow-up) To run inference on the generated offline model, prepare the environment, the OM model file, and input data in .bin format that matches the model's input requirements, and then run the msame tool. Click here to obtain the msame tool, and refer to its readme file for instructions.

Converting a MindSpore Model to an Offline Model

  1. Obtain a MindSpore network model.

    Click here to download the .air model file of the ResNet-50 network and upload the file to any directory (for example, $HOME/module/) in the development environment as the CANN running user.
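
    If you trained the network yourself in MindSpore, the .air file can also be produced with mindspore.export, as in the following hedged sketch (resnet50() is a hypothetical stand-in for your actual network definition; MindSpore with Ascend support is assumed):

    # Export a MindSpore network to the AIR format for ATC.
    import numpy as np
    import mindspore as ms

    net = resnet50(num_classes=1001)  # hypothetical constructor for your network
    dummy = ms.Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))
    ms.export(net, dummy, file_name="ResNet50", file_format="AIR")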

  2. Run the following command (the path and file arguments in the command are for reference only).
    atc --model=$HOME/module/ResNet50.air --framework=1 --output=$HOME/module/out/ResNet50_mindspore --soc_version=<soc_version>

    For details about the command-line options, see Command Line Options. Replace <soc_version> with the name of the SoC on which inference will run. For how to query the SoC name, see --soc_version.

  3. You should see information similar to the following if the conversion is successful. If it fails, refer to Troubleshooting to locate the fault.
    ATC run success
    

    Find the generated offline model (for example, ResNet50_mindspore.om) in the path specified by the --output argument.

    If the model build fails because the data type of an AI CPU operator is not supported, you can enable the Auto Cast feature (a Cast operator is automatically inserted to convert the data type to a supported one). For details, see How Do I Enable Auto Cast for AI CPU Operators?.

  4. (Follow-up) To run inference on the generated offline model, prepare the environment, the OM model file, and input data in .bin format that matches the model's input requirements, and then run the msame tool. Click here to obtain the msame tool, and refer to its readme file for instructions.