OPP Deployment

Introduction

Operator deployment refers to installing the built custom OPP (a .run package) into the CANN operator library.

In the inference scenario, deploy the custom OPP to the CANN operator library of the development environment.

In the training scenario, deploy the custom OPP to the CANN operator library of the operating environment.

The OPP must be built and deployed on the same OS version and architecture.

In MindStudio

For details about how to deploy an OPP developed in MindStudio, see "Operator Development" > "Operator Deployment" in the MindStudio IDE User Guide.

In the Command Line

  1. In the training scenario, copy the custom OPP runfile custom_opp_<target os>_<target architecture>.run to any directory in the operating environment as the running user, and then deploy it by following the steps below. If your development environment and operating environment are set up on the same server, skip this step.
  2. In the directory of the custom OPP, run the following command to install the OPP:

    ./custom_opp_<target os>_<target architecture>.run --install-path=<path>

    • The --install-path parameter is optional and specifies the installation directory of the custom OPP. The running user must have read and write permissions on the specified directory. In the following description, <vendor_name> indicates the value of the vendor_name field in the build.sh script used when building the operator project; the default value is customize.
      • In the default installation scenario, the --install-path parameter is not set. After the installation is successful, the custom operator files generated after compilation are deployed to the ${INSTALL_DIR}/opp/vendors/<vendor_name> directory. Replace ${INSTALL_DIR} with the actual CANN directory. For example, if the Ascend-CANN-Toolkit package is installed, the file path is $HOME/Ascend/ascend-toolkit/latest.

        The permissions on the default installation path ${INSTALL_DIR}/opp/vendors depend on the user who installed the CANN package and on how it was configured. If the custom OPP fails to install due to insufficient permissions, either use the --install-path parameter and configure the ASCEND_CUSTOM_OPP_PATH environment variable to specify an installation directory (for details, see the installation scenario with a specified directory), or ask the user who installed the CANN package to modify the permissions on the vendors directory.

      • In an installation scenario with a specified directory, configure the --install-path parameter. After the installation is successful, the custom operator files generated after building are deployed to the <path>/<vendor_name> directory, and set_env.bash is added to the <path>/<vendor_name>/bin directory for setting environment variables related to the custom OPP.
        • If the installation directory of the OPP is specified using the --install-path parameter, run the source <path>/<vendor_name>/bin/set_env.bash command before using the custom operator. The set_env.bash script prepends the installation path of the custom OPP to the ASCEND_CUSTOM_OPP_PATH environment variable so that the custom operator takes effect in the current environment.
        • An OPP generated by building an operator project sample can be installed in a specified absolute path or relative path.
        • An OPP generated by building an operator project created in msopgen can be installed in a specified absolute path.
        • An OPP generated by building an operator project created in MindStudio cannot be installed in a specified directory.
    • If a custom operator with the same vendor_name already exists in the same installation directory, information similar to the following is displayed (the "upgrade framework" operation is used as an example):
      [ops_custom] upgrade framework caffe onnx tensorflow
      [INFO]: has old version in /usr/local/Ascend/latest/opp_x86_64-linux/vendors/customize/framework:
      - Overlay installation, please enter: [o]
      - Replace directory installation, please enter: [r]
      - Do not install, please enter: [n]
      • Enter o for an overlay installation: a file in the installation package replaces any existing file with the same name, and existing files not present in the package are retained.
      • Enter r for a replace installation: all files in the installation directory are deleted and replaced with the files in the installation package.
      • Enter n to exit the installation.

      Note: If you are prompted separately for the installation mode of the op proto, op impl, or custom.proto files, enter the corresponding character as prompted.

    Taking the default installation scenario as an example, the directory structure after the deployment is as follows:

    ├── opp    // Operator library directory
    │   ├── vendors    // Directory of custom operators
    │       ├── config.ini     // Priority configuration file of custom operators
    │       ├── vendor_name1   // Directory storing the custom operator of vendor vendor_name1. vendor_name is configured during the build of the custom OPP; if it is not configured, the default value customize is used.
    │           ├── op_impl
    │               ├── ai_core    // Directory of the TBE custom operator implementation files and operator information library
    │                   ├── tbe      
    │                       ├── config
    │                           ├── ${soc_version}     // Version of Ascend AI Processor
    │                               ├── aic-${soc_version}-ops-info.json     // TBE custom operator information library file
    │                       ├── vendor_name1_impl               // TBE custom operator implementation code
    │                           ├── xx.py
    │               ├── cpu          // Directory of AI CPU custom operator implementation library and information library files
    │                   ├── aicpu_kernel
    │                       ├── impl
    │                           ├── libcust_aicpu_kernels.so    // AI CPU custom operator implementation library
    │                   ├── config   
    │                       ├── cust_aicpu_kernel.json          // AI CPU custom operator information library file
    │               ├── vector_core   // Reserved directory, which can be ignored
    │           ├── framework
    │               ├── caffe       // Directory of Caffe custom operator plugin library file
    │                   ├── libcust_caffe_parsers.so      // Operator plugin library file, including the parsing functions of custom operator plugin
    │                   ├── custom.proto  // Original definition file of the custom operator. This file is read during the operator building to obtain the operator original definition.
    │               ├── onnx       // Directory of ONNX custom operator plugin library file
    │                   ├── libcust_onnx_parsers.so      // Operator plugin library file, including the plugin parsing function of the custom operator
    │               ├── tensorflow         // Directory for storing the plugin library of the TensorFlow custom operator and the configuration file for configuring the NPU's support for the custom operator
    │                   ├── libcust_tf_parsers.so         // Operator plugin library file
    │                   ├── libcust_tf_scope_fusion.so    // Scope fusion pattern definition library file
    │                   ├── npu_supported_ops.json  // File used by Atlas Training Series Product
    │           ├── op_proto        // Directory of custom operator prototype library
    │               ├── libcust_op_proto.so   
    │       ├── vendor_name2   // Directory storing the custom operator of vendor vendor_name2

    Note: Other directories and files can be ignored.
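The top-level layout shown above can be illustrated with a short shell sketch. The paths below are hypothetical: the script recreates only the three directories every vendor package contains (op_impl, framework, op_proto) in a temporary directory, instead of touching a real CANN installation.

```shell
# Hypothetical sketch: recreate the top-level layout that a default
# install produces under ${INSTALL_DIR}/opp/vendors/<vendor_name>,
# using a temporary directory instead of a real CANN installation.
root="$(mktemp -d)/opp/vendors/customize"
mkdir -p "$root/op_impl/ai_core/tbe/config" \
         "$root/op_impl/cpu/aicpu_kernel/impl" \
         "$root/op_impl/cpu/config" \
         "$root/framework/tensorflow" \
         "$root/op_proto"
# List the three directories every vendor package contains.
ls "$root"
```

A real deployment additionally places the .so and .json files listed in the tree above inside these directories.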

  3. Configure the priorities of custom operators.
    If multiple OPPs coexist and custom operators with the same OpType exist in different OPP directories, the operator in the OPP directory with the highest priority is used. The following describes how to configure the OPP priority.
    • Default installation scenario

      If custom operators of multiple vendors exist in the opp/vendors directory, you can configure the priority of the custom OPPs by configuring the config.ini file in the opp/vendors directory.

      The following provides a configuration template of config.ini.

      load_priority=vendor_name1,vendor_name2,vendor_name3
      • load_priority: keyword of priority sequence configuration, which cannot be modified.
      • vendor_name1,vendor_name2,vendor_name3: priority sequence of custom operator vendors, which is arranged in descending order of priority.
    • Installation in a specified directory

      In an installation scenario with a specified directory, if multiple custom OPPs need to take effect, run the set_env.bash script in the installation directory of each OPP. Each execution prepends the installation directory of the current OPP to the ASCEND_CUSTOM_OPP_PATH environment variable, so the priority is determined by the execution order: the later a script is executed, the higher the priority of its OPP.

      For example, if source <path>/vendor_name1/bin/set_env.bash is executed before source <path>/vendor_name2/bin/set_env.bash, the priority of the vendor_name2 OPP is higher than that of the vendor_name1 OPP. The following is an example:

      ASCEND_CUSTOM_OPP_PATH=<path>/vendor_name2:<path>/vendor_name1:
    • The priority of the OPP installed in a specified directory is higher than that of the OPP installed in the default mode.
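The prepend behavior described above can be sketched in plain shell. Here /opt/custom_ops, vendor_name1, and vendor_name2 are placeholder names, and the two assignment lines stand in for what sourcing each vendor's set_env.bash script does.

```shell
# Hypothetical sketch of what sourcing each vendor's set_env.bash does:
# each execution prepends its own installation path to ASCEND_CUSTOM_OPP_PATH.
ASCEND_CUSTOM_OPP_PATH=""          # assume the variable starts empty
path=/opt/custom_ops               # placeholder installation directory

# Equivalent of: source <path>/vendor_name1/bin/set_env.bash
ASCEND_CUSTOM_OPP_PATH="${path}/vendor_name1:${ASCEND_CUSTOM_OPP_PATH}"
# Equivalent of: source <path>/vendor_name2/bin/set_env.bash
ASCEND_CUSTOM_OPP_PATH="${path}/vendor_name2:${ASCEND_CUSTOM_OPP_PATH}"
export ASCEND_CUSTOM_OPP_PATH

# vendor_name2 now comes first, so it has the higher priority.
echo "$ASCEND_CUSTOM_OPP_PATH"
# prints /opt/custom_ops/vendor_name2:/opt/custom_ops/vendor_name1:
```

Because the variable is searched left to right, whichever script ran last contributes the leftmost, highest-priority entry.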