Remote Deployment
Deploy the custom operator installation package custom_opp_Linux_Arch.run to the OPP of the hardware environment where the Ascend AI Processor is located. This establishes the conditions required for the operator to run on the network later.
- On the MindStudio project page, select the operator project.
- On the top menu bar, choose the operator deployment option. The operator deployment dialog box is displayed.
Select configuration options on the page. For details about how to configure Deployment, see Ascend Deployment.
- Configure environment variables in either of the following ways:
- Configure environment variables on the host in the hardware environment where the Ascend AI Processor is deployed.
In MindStudio, deploy the operator on the host as the running user. Before deploying the operator, ensure that the following environment variable is configured on the host.
- Add the following line to the $HOME/.bashrc file on the host as the running user:
export ASCEND_OPP_PATH=Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest/opp
Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest is the OPP installation path. Replace it with the actual path.
- Run the following command to make the environment variable take effect:
source ~/.bashrc
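The two steps above can be sketched as a single shell snippet. This is a sketch only: TOOLKIT_HOME is an assumed placeholder standing in for your actual Ascend-CANN-Toolkit installation directory.

```shell
# Assumption: /usr/local/Ascend is a common install prefix; replace it
# with your actual Ascend-CANN-Toolkit installation directory.
TOOLKIT_HOME="${TOOLKIT_HOME:-/usr/local/Ascend}"

# Append the export to ~/.bashrc unless it is already present.
grep -q 'ASCEND_OPP_PATH' "$HOME/.bashrc" 2>/dev/null || \
  echo "export ASCEND_OPP_PATH=${TOOLKIT_HOME}/ascend-toolkit/latest/opp" >> "$HOME/.bashrc"

# Make the environment variable take effect in the current shell.
. "$HOME/.bashrc"
echo "$ASCEND_OPP_PATH"
```

The `grep` guard keeps the snippet idempotent, so rerunning it does not append duplicate export lines to the file.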
- Add the environment variable in Environment Variables.
Type ASCEND_OPP_PATH=Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest/opp in the Environment Variables field.
Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest is the OPP installation path. Replace it with the actual path.
You can also click the icon next to the text box and enter a value in the displayed dialog box.
- Type ASCEND_OPP_PATH in the Name field.
- Enter the environment variable value Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest/opp in the Value text box.
- Select the OPP runfile.
In Operator Package, select the directory containing the OPP runfile to be deployed.
- Select the target server for operator deployment and click Operator deploy.
- Deploy the operators. Operator deployment is equivalent to installing the custom operator installation package generated in Operator Project Build. After deployment, the operator resides in the OPP installation path on the host. The default path is /usr/local/Ascend/opp/.
Figure 1 Operator deployment log messages
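As a quick post-deployment check, you can list the vendors directory where each deployed custom package lands. This is a sketch, assuming the default OPP path /usr/local/Ascend/opp/ when ASCEND_OPP_PATH is not set:

```shell
# Assumption: the default OPP path; ASCEND_OPP_PATH overrides it when set.
OPP_DIR="${ASCEND_OPP_PATH:-/usr/local/Ascend/opp}"

if [ -d "$OPP_DIR/vendors" ]; then
  # Each deployed custom operator package appears as one vendor_name directory.
  ls "$OPP_DIR/vendors"
else
  echo "vendors directory not found under $OPP_DIR"
fi
```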

After the custom operator is deployed on the host, the directory structure is similar to the following:
├── opp                                      // OPP directory
│   ├── vendors                              // Directory of custom operators
│   │   ├── config.ini                       // Priority configuration file of custom operators
│   │   ├── vendor_name1                     // Custom operator deployed by the vendor. vendor_name is configured during the build of the custom operator installation package. If vendor_name is not configured, the default value customize is used.
│   │   │   ├── op_impl
│   │   │   │   ├── ai_core
│   │   │   │   │   ├── tbe
│   │   │   │   │   │   ├── config
│   │   │   │   │   │   │   ├── aic_ops_info.json       // Custom operator information library file
│   │   │   │   │   │   ├── vendor_name1_impl           // Custom operator implementation code file
│   │   │   │   │   │   │   ├── add.py
│   │   │   │   ├── cpu
│   │   │   │   │   ├── aicpu_kernel/
│   │   │   │   │   │   ├── vendor_name1_impl           // Custom operator implementation code file
│   │   │   │   │   │   │   ├── libcust_aicpu_kernels.so
│   │   │   │   │   ├── config
│   │   │   │   │   │   ├── cust_aicpu_kernel.json      // Custom operator information library file
│   │   │   │   ├── vector_core                         // Reserved directory, which can be ignored
│   │   │   ├── framework
│   │   │   │   ├── caffe                               // Directory of the plugin library of custom Caffe operators
│   │   │   │   ├── onnx                                // Directory of the plugin library of custom ONNX operators
│   │   │   │   ├── tensorflow                          // Directory of the plugin library of custom TensorFlow operators
│   │   │   │   │   ├── libcust_tf_parsers.so
│   │   │   │   │   ├── npu_supported_ops.json          // File used by Atlas training products
│   │   │   ├── op_proto
│   │   │   │   ├── libcust_op_proto.so                 // Prototype library file of the custom operator
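The layout above can be spot-checked with a short script. This is a sketch, assuming the default OPP path and the default vendor name customize (used when vendor_name was not configured at package-build time):

```shell
# Assumptions: default OPP path and default vendor name "customize".
OPP_DIR="${ASCEND_OPP_PATH:-/usr/local/Ascend/opp}"
VENDOR="${VENDOR:-customize}"

# The three top-level subdirectories a deployed vendor package should contain.
for sub in op_impl framework op_proto; do
  if [ -d "$OPP_DIR/vendors/$VENDOR/$sub" ]; then
    echo "found:   vendors/$VENDOR/$sub"
  else
    echo "missing: vendors/$VENDOR/$sub"
  fi
done
```

A "missing" line for any of the three directories suggests the deployment did not complete, or that a different vendor_name was configured when the package was built.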