AI CPU Operator Development Workflow

The workflows for developing AI CPU operators in the MindStudio IDE are as follows:
  • Figure 1 shows the workflow of developing a TensorFlow/ONNX/PyTorch AI CPU operator.
  • Figure 2 shows the workflow of developing a MindSpore AI CPU operator.
Figure 1 TensorFlow/ONNX/PyTorch AI CPU operator development workflow
Figure 2 MindSpore AI CPU operator development workflow
  1. Operator Analysis: Determine the operator functionality, inputs, and outputs, select an operator development mode, and name the operator type and implementation function.
  2. Operator Project Creation: Create an AI CPU operator project in MindStudio IDE. After that, the operator project directory and corresponding file templates are automatically generated. You can develop operators based on these templates.
  3. Operator Development:
    • Operator Code Implementation: Implement the compute logic of the operator.
    • Operator Prototype Definition: Define the constraints for running the operator on Ascend AI Processors, chiefly its mathematical meaning: the operator's inputs, outputs, attributes, and their value ranges, along with argument verification and shape inference. The prototype is registered with the operator prototype library of GE (Graph Engine). During offline model conversion, GE calls the verification API of the prototype library to verify operator arguments. If verification passes, GE infers the output shape and dtype of each node by calling the inference function of the prototype library and allocates static memory for the result tensor.
    • Operator Information Library Definition: Register the operator with the operator information library, including the OpType and the names and dtypes of its inputs and outputs. During network execution, the AI CPU Engine performs basic verification and operator matching based on the entries in the operator information library.
    • Operator Plugin Implementation: In the custom operator development scenario based on a third-party framework (such as TensorFlow or ONNX), after implementing the custom operator, you need to develop a plugin to map the third-party operator to an operator supported by the Ascend AI Processor.
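An operator information library entry is a plain `.ini` section keyed by the OpType. The fragment below is a hypothetical entry for the same "AddCust" operator; the exact field names and values vary across CANN versions, so treat this as an illustrative sketch rather than a reference.

```ini
; Hypothetical AI CPU operator information library entry (field names follow
; the CANN .ini convention but may differ by version).
[AddCust]
opInfo.engine=DNN_VM_AICPU
opInfo.opKernelLib=CUSTAICPUKernel
opInfo.kernelSo=libcust_aicpu_kernels.so
opInfo.functionName=RunCpuKernel
opInfo.userDefined=True
```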
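The compute-logic step above can be sketched with a hypothetical elementwise "AddCust" operator. In a real CANN project this loop would live inside the Compute() method of a class derived from aicpu::CpuKernel and be registered with the kernel-registration macro; the vendor headers are omitted here so the sketch stays self-contained, and the function name is an assumption for illustration only.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Core compute logic of a hypothetical elementwise "AddCust" AI CPU operator.
// In the real kernel, the input and output buffers would be obtained from the
// kernel context rather than passed as std::vector.
std::vector<float> AddCustCompute(const std::vector<float>& x,
                                  const std::vector<float>& y) {
  assert(x.size() == y.size());  // shapes were already checked by the prototype
  std::vector<float> z(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) {
    z[i] = x[i] + y[i];  // elementwise add: the operator's mathematical meaning
  }
  return z;
}
```

The project template generated in step 2 provides the surrounding kernel class and registration boilerplate; only this inner loop is operator-specific.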
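The shape inference described in the prototype-definition step can be mocked without the CANN graph headers. For an elementwise operator, the output shape and dtype simply mirror the inputs; GE calls a function with this logic during offline model conversion to size the result tensor statically. The struct and function names below are illustrative stand-ins, not the real CANN API.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mock of a tensor descriptor; the real prototype library uses its own
// descriptor type and dtype enum (e.g. DT_FLOAT).
struct TensorDescMock {
  std::vector<int64_t> shape;
  int dtype;
};

// Verification plus shape inference for an elementwise two-input operator:
// both inputs must agree, and the output mirrors them.
bool InferShapeElementwise(const TensorDescMock& x,
                           const TensorDescMock& y,
                           TensorDescMock* out) {
  if (x.shape != y.shape || x.dtype != y.dtype) {
    return false;  // verification failure: model conversion is aborted
  }
  out->shape = x.shape;  // output shape mirrors the input
  out->dtype = x.dtype;
  return true;
}
```

With the inferred shape and dtype in hand, GE can allocate static memory for the result tensor before the graph ever runs.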
  4. UT: Verify the operator implementation logic in a simulation environment, covering both the operator compute implementation and the operator prototype definition.
  5. Operator Build: Build the plugin implementation file into an operator plugin, the prototype definition file into an operator prototype library, and the information library definition file into an operator information library.
  6. Operator Deployment: Deploy the operator implementation file, along with the built operator plugin, operator prototype library, and operator information library, to the OPP package of the Ascend AI Processor (the corresponding directories under the opp directory).
  7. PyTorch Adaptation: Adapt the operator to PyTorch through the Ascend AI Processor extension, which provides memory management, device management, and operator invocation.
  8. ST: Verify the functionality of the operator implementation code on real hardware with automatically generated test cases.