Operator Development Scenarios

The CANN operator library contains a wide range of high-performance operators that improve the running performance of neural networks. The operators in the library are implemented and built in advance: they are highly optimized kernel functions developed by Huawei engineers in programming languages specific to the Ascend AI Processor architecture, so they adapt well to the underlying hardware and deliver high performance.

Generally, you need to develop custom operators only in the following scenarios:

  • Training: When the network training script of a third-party framework (such as TensorFlow or PyTorch) is migrated to the Ascend AI Processor, some operators are not supported.
  • Inference: When a third-party framework model (such as TensorFlow, Caffe, or ONNX) is converted with the ATC tool into an offline model adapted to the Ascend AI Processor, some operators are not supported.
  • Network tuning: Some operators have low performance and degrade overall network performance. Such operators need to be replaced with high-performance ones.
  • Inference: Some logic in an application involves mathematical operations (such as searching for the maximum value or converting data types). You can call custom operators in the application to accelerate these operations; because the custom operators run on the AI Processor, performance improves.

    For example, in a classification application you can customize an operator (for example, ArgMax) that searches the inference result of the classification model for the indexes of the top 5 classes with the highest probabilities. You can then call this operator directly through an AscendCL API to postprocess the inference result.
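To make the ArgMax/top-5 example concrete, the following is a minimal host-side C++ sketch of the computation such a custom operator would perform: given a probability vector produced by a classification model, return the indexes of the k largest values. The function name `TopKIndexes` is a hypothetical illustration, not part of the AscendCL API; a real custom operator would implement this logic as a kernel running on the AI Processor.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Host-side illustration (hypothetical helper, not an AscendCL API):
// return the indexes of the k largest entries in a probability vector,
// in descending order of probability -- the core logic of an
// ArgMax/TopK postprocessing operator.
std::vector<int> TopKIndexes(const std::vector<float>& probs, int k) {
    std::vector<int> idx(probs.size());
    std::iota(idx.begin(), idx.end(), 0);  // fill with 0, 1, 2, ...
    // Partially sort so the first k indexes point at the largest values.
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&probs](int a, int b) { return probs[a] > probs[b]; });
    idx.resize(k);
    return idx;
}
```

In a real application, offloading this step to a custom operator on the AI Processor avoids copying the full inference output back to the host just to pick the top 5 classes.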