TBE Operator Development Workflow
The following describes the flow of developing a TBE operator in MindStudio IDE:
- Figure 1 shows the workflow of developing a TensorFlow/ONNX/Caffe TBE operator.
- Figure 2 shows the workflow of developing a PyTorch TBE operator.
- Figure 3 shows the workflow of developing a MindSpore TBE operator.
- Operator Analysis: Determine the operator functionality, inputs, and outputs, select an operator development mode, and name the operator type and implementation function.
- Operator Project Creation: Create a TBE operator project in MindStudio IDE. MindStudio then automatically generates the operator project directory and the corresponding file templates, which you can use as the basis for operator development.
- Operator Development:
- Operator Code Implementation: Implement the operator by describing its computation process in code.
- Operator Prototype Definition: Define the constraints for running the operator on the Ascend AI Processor, mainly by specifying the operator's inputs, outputs, attributes, and their value ranges, verifying arguments, and inferring shapes. The information defined in the prototype is registered with the operator prototype library of the Graph Engine (GE). During network execution, GE calls the verification API of the operator prototype library to verify the operator arguments. If the verification passes, GE infers the output shape and dtype of each node by calling the inference function of the prototype library and allocates static memory for the result tensors.
- Operator Information Library Definition: Register the operator information with the operator information library, including the supported input and output dtypes and formats and the input shapes of the operator. During network execution, the Fusion Engine (FE) performs basic verification against the operator information library and determines whether a conversion node needs to be inserted for the operator. FE also locates the corresponding operator implementation file based on this information and builds it into the operator binary file for execution.
- Operator Plugin Implementation: Develop plugins for operators that originate from a third-party framework (TensorFlow, ONNX, or Caffe) to map them to operators adapted to the Ascend AI Processor, and register the mapping with GE. When a network from a third-party framework is loaded, GE first loads and calls the plugin to parse the operators in the network and map them to operators supported by the Ascend AI Processor.
- UT (Unit Test): Verify the operator implementation logic in a simulation environment, covering both the operator logic implementation code and the operator prototype definition.
- Operator Build: Build the operator plugin implementation file into an operator plugin, the operator prototype definition file into an operator prototype library, and the operator information library definition file into an operator information library.
- Operator Deployment: Deploy the operator implementation file, the built operator plugin, the operator prototype library, and the operator information library to the operator package (OPP) of the Ascend AI Processor so that the operator can run in a network.
- PyTorch Adaptation: Adapt the operator to PyTorch through the Ascend AI Processor extension, which provides memory management, device management, and operator invocation.
- ST (System Test): Verify the functionality of the operator implementation code on actual hardware using automatically generated test cases.
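The argument verification and shape inference performed in the Operator Prototype Definition step can be sketched in plain Python. This is a conceptual illustration only: `verify_add` and `infer_shape_add` are hypothetical names, not GE prototype-library APIs.

```python
# Conceptual sketch of prototype-style verification and shape inference
# for an elementwise Add operator. Function names are hypothetical and
# only illustrate what GE asks the operator prototype library to do.

def verify_add(x_dtype, y_dtype):
    """Argument verification: both inputs must share a supported dtype."""
    supported = {"float16", "float32", "int32"}
    return x_dtype == y_dtype and x_dtype in supported

def infer_shape_add(x_shape, y_shape):
    """Shape inference with NumPy-style, trailing-aligned broadcasting."""
    out = []
    for a, b in zip(reversed(x_shape), reversed(y_shape)):
        if a != b and 1 not in (a, b):
            raise ValueError(f"shapes {x_shape} and {y_shape} do not broadcast")
        out.append(max(a, b))
    # Prepend the leading dimensions of the longer shape unchanged.
    longer = x_shape if len(x_shape) >= len(y_shape) else y_shape
    out.extend(reversed(longer[: len(longer) - len(out)]))
    return list(reversed(out))
```

On a real network, GE would call the verification first and run inference such as `infer_shape_add([2, 3, 4], [3, 1])`, which yields `[2, 3, 4]`, only after verification passes.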
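An operator information library entry is typically a small key-value configuration file. The sketch below shows a hypothetical entry for an Add operator and the kind of basic consistency check FE-style verification might perform; the section and key names are illustrative, not the exact Ascend schema.

```python
import configparser

# Hypothetical operator information entry for an Add operator.
# Section and key names are illustrative only; the real schema is
# defined by the Ascend operator information library format.
INFO_INI = """
[Add]
input0.name=x1
input0.dtype=float16,float32,int32
input0.format=ND,ND,ND
input1.name=x2
input1.dtype=float16,float32,int32
input1.format=ND,ND,ND
output0.name=y
output0.dtype=float16,float32,int32
output0.format=ND,ND,ND
opFile.value=add
opInterface.value=add
"""

parser = configparser.ConfigParser()
parser.read_string(INFO_INI)

# FE-style basic check: each input/output lists the same number of
# dtype and format candidates, so they can be matched pairwise.
entry = parser["Add"]
for port in ("input0", "input1", "output0"):
    dtypes = entry[f"{port}.dtype"].split(",")
    formats = entry[f"{port}.format"].split(",")
    assert len(dtypes) == len(formats), f"{port}: dtype/format mismatch"
```

Keeping the dtype and format lists index-aligned is what lets a framework pick one supported (dtype, format) combination per port.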
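Conceptually, an operator plugin translates a third-party framework node into the Ascend operator type that GE should instantiate. The registry below is a minimal model of that mapping; `OP_MAPPING` and `parse_op` are hypothetical and do not reflect the real plugin registration API.

```python
# Conceptual model of operator plugin mapping: translate a third-party
# framework node into the Ascend operator type GE should instantiate.
# The registry and function below are illustrative, not the plugin API.

OP_MAPPING = {
    # (framework, framework op type) -> Ascend operator type
    ("tensorflow", "AddV2"): "Add",
    ("onnx", "Add"): "Add",
    ("caffe", "Eltwise"): "Eltwise",
}

def parse_op(framework, op_type):
    """Return the Ascend operator type for a framework node, if mapped."""
    try:
        return OP_MAPPING[(framework, op_type)]
    except KeyError:
        raise NotImplementedError(
            f"{framework} operator {op_type!r} has no registered plugin")
```

In this model, loading a TensorFlow graph would resolve each node through `parse_op("tensorflow", ...)` before GE builds the Ascend graph.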
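The UT step follows a common pattern: run the operator implementation on known inputs and compare the output with a golden result. The sketch below uses Python's `unittest` with a trivial stand-in compute function (`add_compute` is hypothetical); a real UT would exercise the TBE implementation in the simulation environment.

```python
import unittest

def add_compute(x, y):
    """Stand-in for the operator's computation logic (elementwise add)."""
    return [a + b for a, b in zip(x, y)]

class TestAddOperator(unittest.TestCase):
    """UT pattern: run the implementation on known inputs and compare
    the result with a precomputed golden output."""

    def test_elementwise_add(self):
        x = [1.0, 2.0, 3.0]
        y = [4.0, 5.0, 6.0]
        golden = [5.0, 7.0, 9.0]
        self.assertEqual(add_compute(x, y), golden)

# Run the suite programmatically; a real UT harness would target the
# simulator and also cover the operator prototype definition.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddOperator)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same case structure is typically reused for ST, with the golden comparison run against results from actual hardware instead of the simulator.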
Parent topic: Development Workflow


