Quick Start
This section describes how to quickly perform AOE tuning in TensorFlow-based training scenarios, using operator tuning as an example. If subgraph tuning is required, set AOE_MODE to 1 when configuring the environment variables.
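Since the training script itself is a Python program, the tuning mode can also be selected from within the launcher process before training starts. A minimal sketch; the value "1" for subgraph tuning comes from this section, while "2" for operator tuning is an assumption that should be confirmed against the Environment Variable Configuration section:

```python
import os

# Select the AOE tuning mode before the training graph is built.
# "1" enables subgraph tuning (per this section); "2" is assumed here
# to enable operator tuning -- verify against the Environment Variable
# Configuration section of your CANN documentation.
os.environ["AOE_MODE"] = "2"

print("AOE_MODE =", os.environ["AOE_MODE"])
```

Setting the variable in the script rather than the shell keeps the tuning mode versioned together with the training code.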
Prerequisites
- Related software has been installed. For details, see Environment Setup.
- Required environment variables have been configured. For details, see Environment Variable Configuration.
Procedure
- Run the training script to perform auto tuning based on the specified tuning mode.
- View the tuning result. The key log information generated during the tuning process is as follows:
# Enable TFAdapter tuning.
in tune mode, training graph handled by tools
# Start the tool for tuning.
Aoe tuning graph.
After the tuning is complete, the following files are generated:
- Custom repository: If the conditions for generating a custom repository are met (see Figure 3), a custom repository is generated. The generated custom repository is stored in the ${HOME}/Ascend/latest/data/aoe/custom/op/${soc_version} directory by default. For details about how to use the tuned custom repository, see Usage of Tuned Custom Repositories.
- Tuning result file: After the tuning is complete, a file named aoe_result_opat_${timestamp}_${pidxxx}.json is generated in the working directory where the tuning is performed. This file records the information about the tuned operators. For details about the fields in this file, see Table 2.
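The tuning result file can also be located and inspected programmatically. A minimal sketch, assuming only that the file matches the aoe_result_opat_*.json naming pattern and contains valid JSON; the actual field names are documented in Table 2 and are not reproduced here:

```python
import glob
import json
import os


def load_latest_aoe_result(work_dir="."):
    """Return the parsed contents of the newest AOE operator tuning
    result file (aoe_result_opat_*.json) in the given working directory."""
    candidates = glob.glob(os.path.join(work_dir, "aoe_result_opat_*.json"))
    if not candidates:
        raise FileNotFoundError("no AOE tuning result file found in " + work_dir)
    # Pick the most recently modified file in case several tuning runs
    # were performed in the same directory.
    latest = max(candidates, key=os.path.getmtime)
    with open(latest, "r") as f:
        return json.load(f)
```

For example, `load_latest_aoe_result()` run from the tuning working directory returns the result as a Python dictionary, whose fields can then be checked against Table 2.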