Environment Setup
- Install the development kit. For details, see Installing the Toolkit Development Kit.
- Configure the environment variables. After the CANN software is installed, log in as the CANN running user and run the source ${install_path}/set_env.sh command to set the environment variables before building and running your application. ${install_path} indicates the CANN installation path, for example, /usr/local/Ascend/ascend-toolkit.
The preceding environment variables take effect only in the current shell session. To make them take effect permanently, write the preceding commands to the ~/.bashrc file as follows:
- Run the vi ~/.bashrc command in any directory as the installation user to open the .bashrc file, and append the preceding source command to the end of the file.
- Run the :wq! command to save the file and exit.
- Run the source ~/.bashrc command for the environment variables to take effect.
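The steps above can be sketched as the following shell commands. The installation path is an assumption taken from the example earlier in this section; replace it with your actual CANN install location:

```shell
# Assumed CANN installation path -- adjust to your environment.
install_path=/usr/local/Ascend/ascend-toolkit
bashrc="$HOME/.bashrc"

# Append the source command only if it is not already present (idempotent),
# so repeated runs do not duplicate the line in ~/.bashrc.
line="source ${install_path}/set_env.sh"
grep -qxF "$line" "$bashrc" 2>/dev/null || echo "$line" >> "$bashrc"

# Then reload the file in the current session:
#   source ~/.bashrc
```

Editing the file directly with vi, as described above, achieves the same result; the snippet simply automates the append step.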
Restrictions
- The tools support the analysis and migration of training scripts for PyTorch 1.11.0, 2.1.0, and 2.2.0.
- The original scripts must run successfully in a GPU environment with Python 3.7 or later.
- The execution logic after script migration must be the same as that before migration.
- If the source code calls third-party libraries, adaptation issues may occur during migration. Before migrating the original code, install the versions of those third-party libraries that have been adapted to Ascend. For details about the adapted third-party libraries and their user guides, see Ascend Extension for PyTorch Suites and Third-Party Libraries.
- The FusedAdam optimizer used in Apex supports neither automatic migration nor PyTorch GPU2Ascend tool-based migration. If the source code contains this optimizer, modify it manually as required.
- The current analysis tool does not support affinity API analysis for the native functions self.dropout(), nn.functional.softmax(), torch.add, def bboexs_diou(), def bboexs_giou(), class LabelSmoothingCrossEntropy(), and ColorJitter. If the original training script uses any of these functions, see "PyTorch x.x.x > Ascend Extension for PyTorch Custom API > torch_npu.contrib" in Ascend Extension for PyTorch API Reference, then analyze and replace the corresponding nodes manually.
- If the user training script contains the amp_C module, which is not supported on the NPU platform, manually delete the code related to import amp_C before training.
- Because the migrated script runs on a different platform from the original script, exceptions may be thrown during debugging and running, for example due to operator differences, and the process may be terminated. Debug and resolve such exceptions based on the specific exception information.
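The version restrictions above can be checked before running the migration tools. The following sketch is an illustration, not part of the tool itself; the supported version sets are taken from the restrictions listed above:

```python
# Preflight check for the migration prerequisites: PyTorch 1.11.0/2.1.0/2.2.0
# and Python 3.7 or later, per the restrictions in this section.
import sys

SUPPORTED_TORCH = {"1.11.0", "2.1.0", "2.2.0"}

def check_environment(torch_version, python_version=sys.version_info):
    """Return a list of problems; an empty list means the prerequisites are met."""
    problems = []
    if python_version < (3, 7):
        problems.append(
            f"Python {python_version[0]}.{python_version[1]} is older than 3.7"
        )
    # PyTorch versions often carry a local build suffix such as "2.1.0+cu121";
    # compare only the base version.
    base = torch_version.split("+")[0]
    if base not in SUPPORTED_TORCH:
        problems.append(f"PyTorch {torch_version} not in {sorted(SUPPORTED_TORCH)}")
    return problems
```

In practice, torch_version would be obtained from torch.__version__ in the GPU environment where the original script runs.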
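The amp_C and FusedAdam restrictions above both require finding the offending code before migration. A minimal scan might look like the following; the helper name and pattern set are illustrative assumptions, not a tool API:

```python
# Flag constructs that the NPU platform or the migration tools cannot handle:
# the amp_C module (must be deleted before training) and Apex's FusedAdam
# optimizer (must be modified manually), per the restrictions above.
import re

UNSUPPORTED_PATTERNS = {
    "amp_C": re.compile(r"^\s*(import\s+amp_C\b|from\s+amp_C\s+import)", re.M),
    "FusedAdam": re.compile(r"\bFusedAdam\b"),
}

def find_unsupported(source):
    """Return the names of unsupported constructs found in a training script."""
    return [name for name, pat in UNSUPPORTED_PATTERNS.items() if pat.search(source)]
```

Running such a scan over the training scripts before migration surfaces the manual edits these two restrictions require.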