AI CPU Introduction
AI CPU executes CPU operators (including control, scalar, and vector operators) on the Ascend AI Processor. The following figure shows its context in the overall solution architecture.
Figure 1 System architecture


The following components are involved in building and executing AI CPU operators:
- Graph Engine (GE): provides a unified intermediate representation (IR) API of the Ascend AI Software Stack for popular machine learning frameworks such as TensorFlow and PyTorch. GE handles the preparation, partitioning, optimization, building, loading, execution, and management of the network model topology, that is, the graph.
- AI CPU Engine: interfaces with GE, provides the AI CPU operator information library, and implements operator registration, operator memory allocation calculation, subgraph optimization, and task generation.
- AI CPU Scheduler: works with the Task Scheduler to schedule and execute NN model tasks.
- AI CPU Processor: performs operator computation and provides the operator implementation library used to execute AI CPU operators.
- Data Processor: preprocesses training sample data in training scenarios.