Independent Building of Operator Deliverables
Overview
After obtaining the operator deliverables and storing them according to the required directory structure, you can build them independently to generate a custom operator installation package (*.run). The build performs the following operations:
- Compile the AI CPU operator code implementation files (.h and .cc) into libcust_aicpu_kernels.so.
- Compile the operator information library definition file (.ini) into a .json file.
- Build the operator prototype definition files (.h and .cc) into libcust_op_proto.so.
- Compile the plugin implementation files (.h and .cc) of the TensorFlow/Caffe/ONNX operator into libcust_{tf/caffe/onnx}_parsers.so.
- Build-related files, such as CMakeLists.txt, are automatically added.
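To make the .ini-to-.json step above concrete, the sketch below converts a tiny operator information entry from INI sections into JSON. This is a simplified local illustration only, not the conversion the build tool actually performs, and the `Conv2D` keys shown are hypothetical; real operator information files carry more fields and validation.

```python
import configparser
import json

def ini_to_json(ini_text: str) -> str:
    """Simplified sketch of converting an operator information .ini definition to JSON."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep key case (configparser lowercases keys by default)
    parser.read_string(ini_text)
    # Each [Section] becomes one operator entry; keys keep their string values.
    return json.dumps({s: dict(parser.items(s)) for s in parser.sections()}, indent=2)

# Hypothetical minimal entry for illustration:
sample = """[Conv2D]
opInfo.engine=DNN_VM_AICPU
opInfo.opKernelLib=CUSTAICPUKernel
"""
print(ini_to_json(sample))
```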
Procedure
- Before performing independent building, ensure that the directory structure of the operator deliverables is as follows:
- Directory structure of the TBE operator deliverables:
├── framework                          // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── tf_plugin                      // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   └── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   └── onnx_plugin                    // Directory of the generated operator plugin code when the source framework is ONNX
│       └── conv2_d_plugin.cc          // Implementation file of the operator plugin
├── op_proto                           // Directory of the operator prototype definition file (mandatory deliverable)
│   ├── conv2_d.h
│   └── conv2_d.cc
├── tbe                                // Mandatory deliverable
│   ├── impl                           // Directory of operator implementation files
│   │   └── conv2_d.py                 // Operator implementation file
│   └── op_info_cfg                    // Directory of the operator information library files
│       └── ai_core
│           └── {Soc Version}          // Ascend AI Processor version
│               └── conv2_d.ini        // Operator information library definition file
└── op_tiling                          // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
- Directory structure of the AI CPU operator deliverables:
├── framework                          // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── tf_plugin                      // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   └── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   └── onnx_plugin                    // Directory of the generated operator plugin code when the source framework is ONNX
│       └── conv2_d_plugin.cc          // Implementation file of the operator plugin
├── op_proto                           // Directory of the operator prototype definition file (mandatory deliverable)
│   ├── conv2_d.h
│   └── conv2_d.cc
├── cpukernel                          // Mandatory deliverable
│   ├── impl                           // Directory of operator code implementation files
│   │   ├── conv2_d_kernels.cc
│   │   └── conv2_d_kernels.h
│   └── op_info_cfg
│       └── aicpu_kernel
│           └── conv2_d.ini            // Operator information library definition file
└── op_tiling                          // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
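Before building, it can help to check that the mandatory sub-directories from the structures above exist. The sketch below is a reading of this page, not an official check performed by the build tool, and the `missing_deliverables` helper and its directory sets are assumptions for illustration (`framework` and `op_tiling` are optional and therefore not checked).

```python
from pathlib import Path

# Mandatory entries per the directory structures above; optional
# directories (framework, op_tiling) are deliberately excluded.
REQUIRED = {
    "tbe": ["op_proto", "tbe/impl", "tbe/op_info_cfg/ai_core"],
    "aicpu": ["op_proto", "cpukernel/impl", "cpukernel/op_info_cfg/aicpu_kernel"],
}

def missing_deliverables(root: str, kind: str) -> list:
    """Return the mandatory sub-directories absent from an operator deliverables path."""
    base = Path(root)
    return [d for d in REQUIRED[kind] if not (base / d).is_dir()]
```

Running such a check before invoking the build catches layout mistakes early, when they are cheapest to fix.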
- Build the operator deliverables.
Go to the directory where the msopgen tool is located (python/site-packages/bin under the CANN component directory) and run the following command. Table 1 describes the parameters.
./msopgen compile -i {operator deliverables directory} -c {CANN installation path}
Table 1 Command-line options

| Option | Description | Required |
| --- | --- | --- |
| compile | Builds the operator deliverables. | Yes |
| -i, --input | Path of the operator deliverables. The path can be either absolute or relative. The user who executes the tool must have the read permission on the path. | Yes |
| -c, --cann | CANN installation directory. | No |
| -h, --help | Help information. | No |
- After the project is built successfully, an operator package (OPP) named custom_opp_<target os>_<target architecture>.run is generated in the build_out directory under the operator deliverables path.
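As an illustration of the output naming pattern above, the sketch below composes the expected custom_opp_<target os>_<target architecture>.run file name for the local machine. The OS/architecture detection here is a simplified assumption for illustration; the values embedded by the build tool may differ (for example, a distribution name rather than "linux").

```python
import platform

def expected_opp_name() -> str:
    """Compose a custom_opp_<target os>_<target architecture>.run name (illustrative only)."""
    target_os = platform.system().lower()   # e.g. "linux"
    target_arch = platform.machine()        # e.g. "x86_64" or "aarch64"
    return f"custom_opp_{target_os}_{target_arch}.run"

print(expected_opp_name())
```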
- Check the operator deliverables path. The files required for independent building (for example, CMakeLists.txt and build.sh) have been added automatically.
- The following shows the directory structure of the TBE operator deliverables after building.
├── build_out                          // Build output directory
├── build.sh                           // Entry script for the build
├── cmake                              // New build directory
│   ├── config.cmake
│   └── util
├── CMakeLists.txt                     // New CMakeLists.txt
├── framework                          // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── CMakeLists.txt                 // New file
│   ├── tf_plugin                      // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   ├── CMakeLists.txt             // New file
│   │   └── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   └── onnx_plugin                    // Directory of the generated operator plugin code when the source framework is ONNX
│       ├── CMakeLists.txt             // New file
│       └── conv2_d_plugin.cc          // Implementation file of the operator plugin
├── op_proto                           // Directory of the operator prototype definition files and the CMakeLists file
│   ├── conv2_d.h
│   ├── conv2_d.cc
│   └── CMakeLists.txt                 // New file
├── tbe
│   ├── CMakeLists.txt                 // New file
│   ├── impl                           // Directory of operator implementation files
│   │   └── conv2_d.py                 // Operator implementation file
│   └── op_info_cfg                    // Directory of the operator information library files
│       └── ai_core
│           └── {Soc Version}          // Ascend AI Processor version
│               └── conv2_d.ini        // Operator information library definition file
├── op_tiling                          // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
│   └── CMakeLists.txt                 // New file
└── scripts                            // Directory of scripts used for custom operator packing
- The following shows the directory structure of the AI CPU operator deliverables after building.
├── build_out                          // Build output directory
├── build.sh                           // Entry script for the build
├── cmake                              // New build directory
│   ├── config.cmake
│   └── util
├── CMakeLists.txt                     // New file
├── framework                          // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── CMakeLists.txt                 // New file
│   ├── tf_plugin                      // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   ├── CMakeLists.txt             // New file
│   │   └── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   └── onnx_plugin                    // Directory of the generated operator plugin code when the source framework is ONNX
│       ├── CMakeLists.txt             // New file
│       └── conv2_d_plugin.cc          // Implementation file of the operator plugin
├── op_proto                           // Directory of the operator prototype definition files and the CMakeLists file
│   ├── conv2_d.h
│   ├── conv2_d.cc
│   └── CMakeLists.txt                 // New file
├── cpukernel
│   ├── CMakeLists.txt                 // New file
│   ├── impl                           // Directory of operator code implementation files
│   │   ├── conv2_d_kernels.cc
│   │   └── conv2_d_kernels.h
│   ├── op_info_cfg
│   │   └── aicpu_kernel
│   │       └── conv2_d.ini            // Operator information library definition file
│   └── toolchain.cmake                // New build file
├── op_tiling                          // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
│   └── CMakeLists.txt                 // New file
└── scripts                            // Directory of scripts used for custom operator packing