Remote Operator Deployment

Deploy the custom OPP run file custom_opp_Linux_Arch.run to the OPP in the hardware environment where the Ascend AI Processor is deployed. This prepares the environment for subsequently running the operators in a network.

  1. On the MindStudio project page, select the operator project.
  2. On the top menu bar, choose Ascend > Operator Deployment. The operator deployment dialog box is displayed.

    Select configuration options on the Operator Deploy Remotely > Deployment page. For details about how to configure Deployment, see Deployment.

  3. Configure environment variables.
    You can configure the environment variables in either of the following ways:
    • Configure environment variables on the host in the hardware environment where the Ascend AI Processors are deployed.

      MindStudio deploys the operator on the host as the running user. Before deploying the operator, ensure that the following environment variable is configured on the host.

      1. Add the following line to the $HOME/.bashrc file on the host as the running user:
        export ASCEND_OPP_PATH=Ascend-CANN-Toolkit installation directory/ascend-toolkit/latest/opp

        Ascend-CANN-Toolkit installation directory/ascend-toolkit/latest is the OPP installation path. Replace it with the actual path.

      2. Run the following command to make the environment variable take effect:

        source ~/.bashrc

    • Add the environment variable in Environment Variables.

      Type ASCEND_OPP_PATH=Ascend-CANN-Toolkit installation directory/ascend-toolkit/latest/opp in the Environment Variables field.

      Ascend-CANN-Toolkit installation directory/ascend-toolkit/latest is the OPP installation path. Replace it with the actual path.

      You can also click the icon next to the text box and enter a value in the displayed dialog box.

      • Type ASCEND_OPP_PATH in the Name field.
      • Enter the environment variable value Ascend-CANN-Toolkit installation directory/ascend-toolkit/latest/opp in the Value text box.
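  The first method above can be sketched from the command line as follows. The toolkit prefix /usr/local/Ascend is an assumption for illustration; replace it with your actual Ascend-CANN-Toolkit installation directory.

  ```shell
  # Assumed toolkit install prefix -- replace with the actual path on your host.
  TOOLKIT_HOME=/usr/local/Ascend
  LINE="export ASCEND_OPP_PATH=${TOOLKIT_HOME}/ascend-toolkit/latest/opp"

  # Append the export to the running user's ~/.bashrc only if it is not there yet.
  grep -qxF "$LINE" "$HOME/.bashrc" 2>/dev/null || echo "$LINE" >> "$HOME/.bashrc"

  # Apply the variable to the current shell as well (same effect as
  # `source ~/.bashrc` for this one line).
  eval "$LINE"
  echo "$ASCEND_OPP_PATH"
  ```

  The grep guard keeps the export line from being appended more than once if the script is rerun.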
  4. Select the specified OPP.

    In Operator Package, select the specified OPP directory.

  5. Select the target server for operator deployment and click Operator deploy.
  6. Deploy the operators. Operator deployment is equivalent to installing the custom OPP generated in Operator Project Build. After the deployment, the operator is deployed in the OPP installation path on the host. The default path is /usr/local/Ascend/opp/.
    Figure 1 Operator deployment log messages

    After the custom OPP is deployed on the host, the directory structure is similar to the following:

    ├── opp      // OPP directory
    │   ├── vendors    // Directory of custom operators
    │       ├── config.ini     // Priority configuration file of custom operators
    │       ├── vendor_name1   // Directory storing a vendor's custom operators. vendor_name is configured during the build of the custom operator installation package. If vendor_name is not configured, the default value customize is used.
    │           ├── op_impl
    │               ├── ai_core
    │                   ├── tbe
    │                       ├── config
    │                           ├── aic_ops_info.json      // Custom operator information library file
    │                       ├── vendor_name1_impl          // Custom operator implementation code file
    │                           ├── add.py
    │               ├── vector_core     // Reserved directory, which can be ignored
    │           ├── framework
    │               ├── caffe        // Directory of the plugin library of custom Caffe operators
    │               ├── onnx         // Directory of the plugin library of custom ONNX operators
    │               ├── tensorflow          // Directory of the plugin library of custom TensorFlow operators
    │                   ├── libcust_tf_parsers.so
    │           ├── op_proto
    │               ├── libcust_op_proto.so    // Prototype library file of the custom operator
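    After deployment, you can spot-check the layout above from a shell. The default OPP root /usr/local/Ascend/opp and the default vendor name customize are assumptions; adjust both to your environment.

    ```shell
    # Default OPP root and vendor name are assumptions -- adjust to your setup.
    OPP_ROOT="${ASCEND_OPP_PATH:-/usr/local/Ascend/opp}"
    VENDOR=customize

    missing=0
    for f in \
        "$OPP_ROOT/vendors/config.ini" \
        "$OPP_ROOT/vendors/$VENDOR/op_proto/libcust_op_proto.so"; do
      if [ -e "$f" ]; then
        echo "found:   $f"
      else
        echo "missing: $f"
        missing=$((missing + 1))
      fi
    done
    ```

    If any of the files are reported missing, redo the deployment or check that you selected the correct OPP directory in step 4.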
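    The config.ini file above sets the priority among vendor directories. It typically holds a single load_priority key listing vendor names in descending priority; the exact contents below are an assumption for illustration, so verify against your installed file.

    ```shell
    # Hypothetical vendors/config.ini contents -- the load_priority key lists
    # vendor directories in descending priority; verify against your installed file.
    cat > /tmp/vendors_config.ini <<'EOF'
    load_priority=customize,vendor_name1
    EOF

    # Extract the ordered vendor list; an earlier entry wins when two vendors
    # provide an operator with the same name.
    priority=$(sed -n 's/^load_priority=//p' /tmp/vendors_config.ini)
    echo "$priority"
    ```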