Development from Scratch
Context
If you cannot find a required operator in the CANN operator library, you need to develop a custom operator and then adapt it to the framework. You can look up the available operators in the Operator Acceleration Library API Reference.
If your custom operators are used only to construct an Ascend graph or for single-operator execution through AscendCL, you can skip the framework adaptation work (the operator plugin development step in the following workflow).
Development Workflow
The workflows for developing a custom operator with MindStudio and with the CLI are almost the same.
The workflow of development from scratch is as follows.


| No. | Action | Description | See Also |
|---|---|---|---|
| 1 | Development mode selection | Analyze the operator and determine the operator development mode (TBE DSL, TBE TIK, or AI CPU). | |
| 2 | Environment setup | Set up the development and operating environments required for operator development, execution, and verification. | |
| 3 | Project creation | Create an operator development project in either of the following ways: using MindStudio or using the CLI. | |
| 4 | Prototype definition | Implement the operator prototype definition file, which specifies the constraints on an operator running on the Ascend AI Processor and mainly reflects the operator's mathematical meaning. The prototype defines the operator's inputs, outputs, attributes, and value ranges, and is used to verify arguments and infer shapes. The defined prototype is registered with GE's operator prototype library. When generating an offline model, GE calls the verification API of the operator prototype library to verify the operator arguments. If the verification passes, GE infers the output shape and dtype of each node by using the inference function of the operator prototype library and allocates static memory for the result tensors. | |
| 5 | Code implementation | Implement the operator code based on the selected development mode. | Operator Code Implementation (TBE DSL) |
| 6 | Information library definition | Implement the operator information library file, which registers the operator information with the operator information library, including the supported input and output data types, formats, and input shapes. During offline model conversion, FE performs basic verification based on the operator information in the information library and inserts a proper format transformation node for the operator as needed. It also locates the corresponding operator implementation file through the information library and builds the operator binary file for execution. | |
| 7 | Operator UT | Perform a unit test (UT) to verify the operator implementation logic and the logic of the operator prototype definition. UT is currently supported only when developing operators with MindStudio. | |
| 8 | Operator plugin development | To use a custom operator with a third-party framework (such as TensorFlow or Caffe), develop a plugin that maps the third-party framework operator to an operator supported by the Ascend AI Processor. | |
| 9 | Build and deployment | Build the custom operator project to generate a custom operator package (OPP), install the OPP, and deploy the custom operators to the operator library. | |
| 10 | Operator ST | Perform a system test (ST) to verify the operator functionality in a real-device environment. | |
| 11 | Operator verification on network | Load the custom operator into a network model and execute the model for verification. | |
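The operator information library in step 6 is an `.ini`-style file. The exact keys depend on the CANN version and the operator type, so treat the following as an illustrative sketch for a hypothetical TBE operator named `CustomAdd` rather than a definitive template; it lists the supported dtypes and formats per input/output and points FE at the implementation file and entry function:

```ini
[CustomAdd]
input0.name=x1
input0.dtype=float16,float32,int32
input0.format=ND,ND,ND
input0.paramType=required
input1.name=x2
input1.dtype=float16,float32,int32
input1.format=ND,ND,ND
input1.paramType=required
output0.name=y
output0.dtype=float16,float32,int32
output0.format=ND,ND,ND
output0.paramType=required
opFile.value=custom_add
opInterface.value=custom_add
```

FE uses entries like these for the basic verification and implementation-file lookup described in step 6.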
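The prototype definition in step 4 centers on two duties: verifying operator arguments and inferring the output shape and dtype. The actual prototype registration is done in C++ against GE's operator prototype interfaces; purely as a language-neutral illustration of what such an inference function computes, here is a minimal Python sketch for a hypothetical elementwise Add with NumPy-style broadcasting (all names are illustrative, not CANN APIs):

```python
from itertools import zip_longest


def broadcast_shape(a, b):
    """Right-align two shapes and apply NumPy-style broadcasting rules."""
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))
    return tuple(reversed(out))


def infer_add(x1, x2):
    """Verify arguments and infer the output tensor descriptor for Add.

    Each tensor descriptor is a dict with "shape" and "dtype" keys
    (a stand-in for GE's tensor descriptor objects).
    """
    if x1["dtype"] != x2["dtype"]:
        raise TypeError("dtype mismatch")  # argument verification failure
    return {
        "shape": broadcast_shape(x1["shape"], x2["shape"]),
        "dtype": x1["dtype"],
    }
```

This mirrors the offline-model flow described in step 4: verification first, then shape/dtype inference for static memory allocation of the result tensor.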
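Conceptually, the plugin in step 8 registers a mapping from a third-party framework operator type to an Ascend operator type, together with a function that translates the framework node's attributes. The real plugin is written in C++ against GE's registration interface; the Python sketch below only models that registration-and-lookup idea, and every name in it is hypothetical:

```python
# Hypothetical registry keyed by (framework, framework op type).
PLUGIN_REGISTRY = {}


def register_custom_op(framework, origin_op_type, ascend_op_type, parse_params):
    """Record how a framework operator maps to an Ascend operator."""
    PLUGIN_REGISTRY[(framework, origin_op_type)] = (ascend_op_type, parse_params)


def map_node(framework, node):
    """Translate one framework graph node into an Ascend operator description."""
    ascend_op_type, parse_params = PLUGIN_REGISTRY[(framework, node["op"])]
    return {"type": ascend_op_type, "attrs": parse_params(node.get("attrs", {}))}


# Example: map TensorFlow's "Add" to a hypothetical Ascend "CustomAdd",
# passing the node attributes through unchanged.
register_custom_op("tensorflow", "Add", "CustomAdd", lambda attrs: dict(attrs))
```

During model conversion, each third-party node would be looked up in such a registry and rewritten as the corresponding Ascend operator.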