Local Deployment
- This section describes how to deploy the custom operator installation package. When an ST test is run, an operator installation package is automatically generated and deployed to the opp directory in the development environment. To change the deployment path, follow this section. If you have already run the operator ST test by referring to ST, skip this section.
- Before deploying operators, ensure that the custom operator installation package custom_opp_Linux_Arch.run has been generated by referring to Operator Project Build.
- Local operator deployment is not supported on Windows.
- On the menu bar of the MindStudio project page, choose .
- Select Operator Deploy Locally. In Operator Package, select the specified OPP directory and click Operator deploy.
If the following information is displayed on the Output tab page, the custom operator is successfully deployed.
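The GUI steps above deploy the package into the toolkit's opp directory. The sketch below shows the equivalent idea from a terminal, assuming a toolkit installed under ~/Ascend (a hypothetical location; adjust to your environment). ASCEND_OPP_PATH is the CANN environment variable that points at the opp directory; the actual .run execution is left commented out because it requires the built artifact.

```shell
# Hypothetical toolkit location; replace with your actual installation directory.
TOOLKIT_HOME="${HOME}/Ascend/ascend-toolkit/latest"

# The opp directory the custom operator package will be deployed into.
export ASCEND_OPP_PATH="${TOOLKIT_HOME}/opp"

echo "Custom operators will be installed under: ${ASCEND_OPP_PATH}/vendors"

# Execute the generated package (requires the build artifact from Operator Project Build):
# ./custom_opp_Linux_Arch.run
```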

After the custom OPP package is installed, the custom operator is deployed to the opp directory under Ascend-CANN-Toolkit_installation_directory/ascend-toolkit/latest.
The directory structure is as follows:
├── opp                                        // OPP directory
│   ├── vendors                                // Directory of custom operators
│   │   ├── config.ini                         // Priority configuration file of custom operators
│   │   ├── vendor_name1                       // Custom operators deployed by vendor vendor_name1. vendor_name is configured during the build of the custom operator installation package; if it is not configured, the default value customize is used.
│   │   │   ├── op_impl
│   │   │   │   ├── ai_core                    // Directory of the TBE operator implementation files and operator information library
│   │   │   │   │   └── tbe
│   │   │   │   │       ├── config
│   │   │   │   │       │   └── soc_version    // Version of the Ascend AI Processor
│   │   │   │   │       │       └── aic-soc_version-ops-info.json  // TBE operator information library file
│   │   │   │   │       └── impl               // TBE operator implementation code files
│   │   │   │   │           └── add.py
│   │   │   │   ├── cpu                        // Directory of the AI CPU operator implementation files and operator information library
│   │   │   │   │   └── aicpu_kernel/
│   │   │   │   │       ├── impl               // AI CPU operator implementation code files
│   │   │   │   │       │   └── libcust_aicpu_kernels.so
│   │   │   │   │       └── config
│   │   │   │   │           └── cust_aicpu_kernel.json  // AI CPU operator information library file
│   │   │   │   └── vector_core                // Reserved directory, which can be ignored
│   │   │   ├── framework
│   │   │   │   ├── caffe                      // Directory of the plugin library of custom Caffe operators
│   │   │   │   │   ├── libcust_caffe_parsers.so
│   │   │   │   │   └── custom.proto
│   │   │   │   ├── onnx                       // Directory of the plugin library of custom ONNX operators
│   │   │   │   │   └── libcust_onnx_parsers.so
│   │   │   │   └── tensorflow                 // Directory of the plugin library of custom TensorFlow operators and the configuration file describing the NPU's support for custom operators
│   │   │   │       ├── libcust_tf_parsers.so
│   │   │   │       ├── libcust_tf_scope_fusion.so  // Scope fusion pattern definition library file
│   │   │   │       └── npu_supported_ops.json      // File used by Atlas training products
│   │   │   └── op_proto                       // Directory of the custom operator prototype library
│   │   │       └── libcust_op_proto.so
│   │   └── vendor_name2                       // Custom operators deployed by vendor vendor_name2
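When several vendor directories coexist, vendors/config.ini determines which vendor's operator wins on a name conflict. The sketch below builds a mock vendors directory under /tmp and reads the priority line; the load_priority key name follows the CANN format for this file, but verify it against your installed version.

```shell
# Build a mock vendors directory mirroring the documented layout.
mkdir -p /tmp/opp_demo/vendors/vendor_name1 /tmp/opp_demo/vendors/vendor_name2
printf 'load_priority=vendor_name1,vendor_name2\n' > /tmp/opp_demo/vendors/config.ini

# Vendors listed earlier in load_priority take precedence on operator name conflicts.
priority=$(grep '^load_priority=' /tmp/opp_demo/vendors/config.ini | cut -d= -f2)
echo "Vendor lookup order: ${priority}"
```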
Parent topic: Operator Deployment