Development from Scratch

Context

If the operator you need is not available in the CANN operator library, you may need to develop a custom operator and then adapt it to your framework. You can look up the available operators in Operator Acceleration Library API Reference.

If your custom operators are used only to construct an Ascend graph or for single-operator execution through AscendCL, you can skip the adaptation step (developing an operator plugin, as shown in the following workflow).

Development Workflow

The workflows of developing a custom operator using MindStudio and the CLI are almost the same.

The workflow of development from scratch is as follows.

Figure 1 Workflow of developing an operator using MindStudio
Figure 2 Workflow of developing an operator using the CLI
Table 1 Operator development workflow

1. Development mode selection
   Analyze the operators and determine the operator development mode (TBE DSL, TBE TIK, or AI CPU).
   See Also: Development Mode Selection

2. Environment setup
   Set up the development and operating environments required for operator development, execution, and verification.
   See Also: Environment Setup

3. Project creation
   Create an operator development project in any of the following ways:
   • In MindStudio: Use MindStudio to create an operator project directly. The operator project and code template are generated automatically.
   • Using the msopgen tool: The tool automatically generates an operator project and code template.
   • Based on the custom operator sample project: Create the operator implementation files from scratch or by manually adapting the existing samples.
   See Also: Project Creation

4. Prototype definition
   Implement the operator prototype definition file, which specifies the constraints on an operator running on the Ascend AI Processor and mainly reflects the mathematical meaning of the operator. A prototype definition declares the operator's inputs, outputs, attributes, and their value ranges, and can be used to verify arguments and infer the output shape. The defined prototype is registered with GE's operator prototype library. When generating an offline model, GE calls the verification API of the operator prototype library to verify the operator arguments. If the verification passes, GE infers the output shape and dtype of each node by calling the inference function in the operator prototype library and allocates static memory for the result tensors.
   See Also: Operator Prototype Definition
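In a real CANN project the inference function registered in the prototype library is written in C++ against GE's operator API, but its job can be illustrated in a few lines of Python. The sketch below (function names such as `infer_elementwise_add` are illustrative, not CANN APIs) shows the kind of argument verification and shape/dtype inference the prototype performs for a broadcasting element-wise operator:

```python
# Illustrative sketch only: real prototype inference functions are C++ code
# registered with GE; this mirrors their logic in plain Python.
from itertools import zip_longest

def broadcast_shape(shape_a, shape_b):
    """Infer the broadcast output shape, aligning dimensions from the right."""
    result = []
    for a, b in zip_longest(reversed(shape_a), reversed(shape_b), fillvalue=1):
        if a != b and a != 1 and b != 1:
            raise ValueError(f"shapes {shape_a} and {shape_b} do not broadcast")
        result.append(max(a, b))
    return list(reversed(result))

def infer_elementwise_add(x1, x2):
    """Verify arguments, then infer the output tensor description."""
    if x1["dtype"] != x2["dtype"]:                  # argument verification
        raise TypeError("input dtypes must match")
    return {"shape": broadcast_shape(x1["shape"], x2["shape"]),
            "dtype": x1["dtype"]}                   # shape/dtype inference
```

For example, inputs of shape [8, 1, 16] and [4, 16] with matching dtypes infer an output of shape [8, 4, 16]; mismatched dtypes are rejected before any memory is allocated, which is exactly the role the prototype's verification API plays during offline model generation.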

5. Code implementation
   • For TBE operators, implement the compute logic and the schedule. TBE operators are implemented in Python; both the TBE DSL and TBE TIK modes are supported.
   • For AI CPU operators, define the operator class and implement the operator computation. AI CPU operators are implemented in C++.
   See Also: Operator Code Implementation (TBE DSL), Operator Code Implementation (TBE TIK), Operator Code Implementation (AI CPU)
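A TBE DSL implementation file typically separates a compute function from the operator entry point that builds the schedule. The pure-Python sketch below uses NumPy as a stand-in for the real DSL (the actual `te.lang.cce`/`tbe.dsl` APIs only run with CANN installed), so treat the structure, not the calls, as the takeaway; all names here are illustrative:

```python
# Sketch of the TBE DSL file structure: compute logic is kept separate from
# the operator entry point. NumPy stands in for the real DSL vector APIs
# (e.g. the DSL's vadd); nothing here is a CANN API.
import numpy as np

def add_compute(x, y):
    """Compute logic: in real TBE code this would chain DSL compute APIs."""
    return np.add(x, y)

def add(x, y, kernel_name="add"):
    """Operator entry point: verifies inputs, runs the compute function, and
    in real TBE code would then build a schedule and compile the kernel."""
    if x.shape != y.shape:
        raise ValueError("this simplified sketch requires equal shapes")
    res = add_compute(x, y)
    # Real implementation would continue with auto-scheduling and a build
    # step that emits the operator binary under the given kernel_name.
    return res
```

The split matters because the framework reuses the compute function for operator fusion while the schedule and build steps are specific to standalone compilation.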

6. Information library definition
   Implement the operator information library file, which registers the operator information, including the input and output data types, formats, and input shapes, with the operator information library. During offline model conversion, FE performs basic verification against the operator information library and inserts appropriate conversion nodes for the operator as needed. FE also locates the corresponding operator implementation file based on the information library and builds it into the operator binary file for execution.
   See Also: TBE Operator Information Library
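The operator information library is a plain .ini-style file. A hypothetical entry for an element-wise Add operator might look like the fragment below; the field layout follows the common CANN convention, but treat the exact keys as an assumption and compare against the .ini files shipped with your CANN version:

```ini
[Add]
input0.name=x1
input0.dtype=float16,float32
input0.paramType=required
input1.name=x2
input1.dtype=float16,float32
input1.paramType=required
output0.name=y
output0.dtype=float16,float32
output0.paramType=required
opFile.value=add
opInterface.value=add
```

The dtype lists are what FE verifies against during model conversion, and `opFile.value`/`opInterface.value` are how FE locates the implementation file and its entry function for the build.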

7. Operator UT
   Perform a unit test (UT) to verify the operator implementation logic and the logic of the operator prototype definition. UT is currently supported only when developing operators with MindStudio.
   See Also: Operator UT
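A typical operator UT feeds the implementation random inputs and compares its result against golden data computed with NumPy. The minimal, framework-free sketch below shows that pattern; the `add` stand-in and test name are illustrative (MindStudio generates the real UT scaffolding, which invokes the actual kernel):

```python
import numpy as np

def add(x, y):
    """Stand-in for the operator under test; a real UT would invoke the
    kernel built from the TBE implementation instead."""
    return np.add(x, y)

def test_add_against_golden():
    # Fixed seed keeps the UT deterministic across runs.
    rng = np.random.default_rng(seed=0)
    x = rng.standard_normal((4, 16)).astype(np.float32)
    y = rng.standard_normal((4, 16)).astype(np.float32)
    golden = x + y                      # golden data from NumPy
    actual = add(x, y)                  # operator result
    np.testing.assert_allclose(actual, golden, rtol=1e-6)

test_add_against_golden()
```

Comparing against an independently computed golden result, rather than hard-coded values, is what lets the same test cover many shapes and dtypes.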

8. Operator plugin development
   To use a custom operator from a third-party framework (such as TensorFlow or Caffe), develop a plugin that maps the third-party operator to an operator supported by the Ascend AI Processor.
   See Also: Operator Adaptation
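In a real project the plugin is C++ code registered with GE and executed during model conversion; the Python sketch below only illustrates the mapping idea behind a plugin's parameter-parsing step, translating a third-party op's type and attribute names into those the custom operator expects. The attribute names and the `Custom` prefix are hypothetical:

```python
# Illustrative only: real plugins are C++ and use GE's registration
# mechanism. This shows the mapping concept, not the CANN plugin API.

# Hypothetical mapping from a TensorFlow op's attribute names to the
# names expected by the custom Ascend operator.
ATTR_MAP = {"ksize": "kernel_size", "padding": "pad_mode"}

def parse_params(tf_node):
    """Map a third-party framework node onto the custom operator definition."""
    return {
        "op_type": "Custom" + tf_node["op"],   # e.g. MaxPool -> CustomMaxPool
        "attrs": {ATTR_MAP.get(k, k): v for k, v in tf_node["attr"].items()},
    }
```

For example, a TensorFlow node `{"op": "MaxPool", "attr": {"ksize": [1, 2, 2, 1]}}` would be mapped to op type `CustomMaxPool` with attribute `kernel_size`, which is the kind of renaming and repacking the plugin performs for every supported node.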

9. Build and deployment
   Build the custom operator project to generate a custom operator package (OPP), install the package, and deploy the custom operators to the operator library.
   See Also: Operator Building and Deployment

10. Operator ST
    Perform a system test (ST) to verify the operator functionality in a real-device environment.
    See Also: Operator ST

11. Operator verification on network
    Load the custom operator into a network model and execute the model to verify the operator.
    See Also: Operator Verification on Network