Using the msopgen Tool
Overview
The CANN Toolkit provides the msopgen tool, which generates the deliverables of an operator project, including the operator implementation file, operator plugin, operator prototype definition file, operator information library definition file, and operator project build configuration file.
To develop more than one AI CPU operator, implement them in the same operator project and build their implementation files into one dynamic library file.
Tool Path
The custom operator project is generated by the executable file msopgen, whose function and installation path are as follows.
| Name | Description | Path |
| --- | --- | --- |
| msopgen | Generates the custom operator project. | python/site-packages/bin under the CANN component directory |
Prerequisites
The CANN portfolio provides a process-level script to set environment variables automatically. The following commands use the default installation paths for the root and non-root users as examples; replace them with the actual installation paths.

```shell
# Install Toolkit as the root user.
. /usr/local/Ascend/ascend-toolkit/set_env.sh
# Install Toolkit as a non-root user.
. ${HOME}/Ascend/ascend-toolkit/set_env.sh
```

- Ensure that the dependencies have been installed.
Procedure
- Confirm the required input file.

The msopgen tool allows you to create an operator project based on the following types of operator prototype definition files:
- IR definition file (.json) of the operator adapted to Ascend AI Processor
- TensorFlow operator prototype definition file (.txt)
The TensorFlow prototype definition file can be used to generate TensorFlow, Caffe, and PyTorch operator projects.
- IR definition file (.xlsx) of the operator adapted to Ascend AI Processor
- Choose a file type that best suits your situation.
- Working with the IR definition file (.json) of the operator adapted to Ascend AI Processor

Find the IR_json.json template file in python/site-packages/op_gen/json_template under the CANN component directory and tweak it as needed by referring to Table 2.
Table 2 JSON file field description

| Field | Type | Description | Required |
| --- | --- | --- | --- |
| op | String | Operator type. | Yes |
| input_desc | List | Input description. | No |
| input_desc > name | String | Input name. | - |
| input_desc > param_type | String | Parameter classification: required, optional, or dynamic. Defaults to required. | - |
| input_desc > format | List | Supported formats for parameters of type tensor. Selected from ND, NHWC, NCHW, HWCN, NC1HWC0, FRACTAL_Z, and more. The quantities of format and type items must be the same. For details, see Format. | - |
| input_desc > type | List | Parameter types. Value range: float16 (fp16), float32 (fp32), int8, int16, int32, uint8, uint16, bfloat16 (bf16), bool, and so on. Different computation operations support different data types; for details, see API Reference. The quantities of format and type items must be the same. | - |
| output_desc | List | Output description. | Yes |
| output_desc > name | String | Output name. | - |
| output_desc > param_type | String | Parameter classification: required, optional, or dynamic. Defaults to required. | - |
| output_desc > format | List | Supported formats for parameters of type tensor. Selected from ND, NHWC, NCHW, HWCN, NC1HWC0, FRACTAL_Z, and more. The quantities of format and type items must be the same. For details, see Format. | - |
| output_desc > type | List | Parameter types. Value range: float16 (fp16), float32 (fp32), int8, int16, int32, uint8, uint16, bfloat16 (bf16), bool, and so on. Different computation operations support different data types; for details, see API Reference. The quantities of format and type items must be the same. | - |
| attr | List | Attribute description. | No |
| attr > name | String | Attribute name. | - |
| attr > param_type | String | Parameter classification: required or optional. Defaults to required. | - |
| attr > type | String | Parameter type. Selected from int, bool, float, string, list_int, list_float, and more. | - |
| attr > default_value | - | Default value. | - |
- Multiple operators can be configured in one JSON file: the file contains a list, and each element represents one operator.
- If input_desc of an operator contains entries with identical name fields, the entry that comes last in the list takes precedence. The same rule applies to output_desc.
- The type and format fields in input_desc and output_desc must match each other position by position. For example, if the type of the first input x is set to [int8, int32], the type of the second input y is set to [fp16, fp32], and the type of the output z is set to [int32, int64], the operator supports the inputs int8 and fp16 producing an int32 output, or the inputs int32 and fp32 producing an int64 output. That is, the type fields of the inputs correspond vertically to the type field of the output and cannot be mixed across positions.
- The type and format fields in input_desc and output_desc must also have the same quantity of items. If the type value is a category such as numbertype, realnumbertype, quantizedtype, BasicType, IndexNumberType, or all, and the quantities of type and format items differ, a message is displayed during project creation and the format items are supplemented based on the quantity of type items so that the project can still be created. If type is a specific value such as int32 and the type and format items cannot match, an error message is displayed and project creation is interrupted.
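To make the rules above concrete, the following sketch builds a one-operator IR definition and checks the format/type quantity rule. The operator name AddCustom and its field values are hypothetical illustrations based on the Table 2 fields, not the shipped IR_json.json template:

```python
import json

# Hypothetical one-operator IR definition following the Table 2 fields.
# The file contains a list; each element describes one operator.
ir = [{
    "op": "AddCustom",
    "input_desc": [
        {"name": "x", "param_type": "required",
         "format": ["ND", "NHWC"], "type": ["float16", "float32"]},
        {"name": "y", "param_type": "required",
         "format": ["ND", "NHWC"], "type": ["float16", "float32"]},
    ],
    "output_desc": [
        {"name": "z", "param_type": "required",
         "format": ["ND", "NHWC"], "type": ["float16", "float32"]},
    ],
}]

def check_counts(op):
    """Each tensor parameter must have equal quantities of format and type items."""
    for desc in op.get("input_desc", []) + op.get("output_desc", []):
        if len(desc["format"]) != len(desc["type"]):
            raise ValueError(f"{desc['name']}: format/type count mismatch")

for op in ir:
    check_counts(op)

json_text = json.dumps(ir, indent=2)
```

Note how the type lists line up vertically: position 0 describes the all-float16 variant of the operator and position 1 the all-float32 variant.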
- Working with TensorFlow prototype definition file (.txt)
The content of TensorFlow prototype definition (.txt) can be obtained from the TensorFlow open-source community. The following uses the prototype definition of operator Add (in /tensorflow/core/ops/math_ops.cc) as an example.
```
REGISTER_OP("Add")
    .Input("x: T")
    .Input("y: T")
    .Output("z: T")
    .Attr(
        "T: {bfloat16, half, float, double, uint8, int8, int16, int32, int64, "
        "complex64, complex128, string}")
    .SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn);
```

Save the preceding code to a .txt file.
A single .txt file can contain the prototype definition of only one operator.
The msopgen tool parses only information such as the operator type, inputs, outputs, and attributes. You may remove irrelevant code from the .txt file.
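For intuition about what the tool reads from such a file, here is a rough sketch (a simplification for illustration, not msopgen's actual parser) that extracts the operator type, inputs, outputs, and attributes from a REGISTER_OP snippet like the one above:

```python
import re

# A trimmed REGISTER_OP snippet like the Add example above.
snippet = '''
REGISTER_OP("Add")
    .Input("x: T")
    .Input("y: T")
    .Output("z: T")
    .Attr("T: {bfloat16, half, float, double}")
    .SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn);
'''

# Operator type is the string argument of REGISTER_OP.
op_type = re.search(r'REGISTER_OP\("([^"]+)"\)', snippet).group(1)
# Inputs, outputs, and attributes are the names before the colon.
inputs  = re.findall(r'\.Input\("([^":]+):', snippet)
outputs = re.findall(r'\.Output\("([^":]+):', snippet)
attrs   = re.findall(r'\.Attr\("([^":]+):', snippet)
```

Running this on the Add example yields op_type "Add", inputs x and y, output z, and attribute T; everything else, such as the SetShapeFn call, is irrelevant to project generation.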
- Working with Excel IR definition file of the operator adapted to Ascend AI Processor
Find the Ascend_IR_Template.xlsx template file in toolkit/tools/msopgen/template in the CANN software installation directory and tweak the "Op" sheet as needed. More than one operator can be defined on the "Op" sheet. The following parameters are required for defining an operator.
Table 3 Parameters in IR prototype definition

| Column Header | Description | Required |
| --- | --- | --- |
| Op | Operator type. | Yes |
| Classify | Parameter classification: input (INPUT), dynamic input (DYNAMIC_INPUT), output (OUTPUT), dynamic output (DYNAMIC_OUTPUT), or attribute (ATTR). | Yes |
| Name | Parameter name. | Yes |
| Type | Parameter type. Selected from tensor, int, bool, float, ListInt, ListFloat, and more. | Yes |
| TypeRange | Supported data types for parameters of type tensor. Selected from fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bf16, bool, and more. In MindSpore, selected from the following equivalents: I8_Default, I16_Default, I32_Default, I64_Default, U8_Default, U16_Default, U32_Default, U64_Default, BOOL_Default, and more. | No |
| Required | Whether an input is required: TRUE or FALSE. | Yes |
| Doc | Parameter description. | No |
| Attr_Default_value | Default value of an attribute. | No |
| Format | Supported formats for parameters of type tensor. Selected from ND, NHWC, NCHW, HWCN, NC1HWC0, FRACTAL_Z, and more. | No |
| Group | Operator category. | No |
The following is a configuration example.
Table 4 Example of IR prototype definition (the first row of the template is reserved)

| Op | Classify | Name | Type | TypeRange | Required | Doc | Attr_Default_value | Format |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reshape | INPUT | x | tensor | fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bf16, bool | TRUE | - | - | ND |
| | INPUT | shape | tensor | int32, int64 | FALSE | - | - | - |
| | DYNAMIC_OUTPUT | y | tensor | fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bf16, bool | FALSE | - | - | ND |
| | ATTR | axis | int | - | FALSE | - | 0 | - |
| | ATTR | num_axes | int | - | FALSE | - | -1 | - |
| ReshapeD | INPUT | x | tensor | fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bf16, bool | TRUE | - | - | ND |
| | OUTPUT | y | tensor | fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bf16, bool | TRUE | - | - | ND |
| | ATTR | shape | list_int | - | FALSE | - | {} | - |
| | ATTR | axis | int | - | FALSE | - | 0 | - |
| | ATTR | num_axes | int | - | FALSE | - | -1 | - |
- Tweak the "Op" sheet as needed.
- Do not delete the first three rows and columns on the "Op" sheet.
- Create an operator project.
Go to the directory where the msopgen tool is located and run the following command. Table 5 describes the command-line arguments.
```shell
./msopgen gen -i {operator define file} -f {framework type} -c {Compute Resource} -out {Output Path}
```
Table 5 Command-line options

| Option | Description | Required |
| --- | --- | --- |
| gen | Generates operator deliverables. | Yes |
| -i, --input | Path of the operator definition file. The path can be either absolute or relative, and the user who runs the tool must have the read permission on it. The following types of operator definition files are supported: the IR definition file (.json) of the operator adapted to Ascend AI Processor, the TensorFlow operator prototype definition file (.txt), and the IR definition file (.xlsx) of the operator adapted to Ascend AI Processor. | Yes |
| -f, --framework | Framework type: tf or tensorflow (TensorFlow), caffe (Caffe), pytorch (PyTorch), ms or mindspore (MindSpore), or onnx (ONNX). The arguments are case-insensitive. | No |
| -c, --compute_unit | Compute resource used by the operator. For a TBE operator, set this option in the format ai_core-{Soc Version}, where {Soc Version} is the actual version of the Ascend AI Processor. To obtain {Soc Version}, run the npu-smi info command on the server where the Ascend AI Processor is installed and read the Chip Name value; the actual value is Ascend{Chip Name} (for example, if Chip Name is xxxyy, the actual value is Ascendxxxyy). Basic functions (operator development, compilation, and deployment based on the project) are applicable across operator projects created for the same AI processor series. For an AI CPU operator, set this option to aicpu. | Yes |
| -out, --output | Output path. The path can be either absolute or relative, and the user who runs the tool must have the read and write permissions on it. If this option is not specified, the outputs are generated in the current path where the command is executed. | No |
| -m, --mode | Deliverable generation mode. 0: generates the deliverables to a new operator project; if an operator project already exists in the specified path, an error is reported and the tool exits. 1: generates the deliverables to an existing operator project. Defaults to 0. | No |
| -op, --operator | Operator type, for example, Conv2DTik. Applies to the scenario where -i is set to an operator IR definition file. If this option is not set and the IR definition file contains multiple operators, the tool prompts you to select one. | No |
| -lan, --language | Operator coding language. py: Python operator development based on the DSL and TIK frameworks; cpp: C/C++ operator development based on the Ascend C framework. Defaults to py. | No |
Example:
Use the IR_json.json template as the input to create an operator project whose original framework is TensorFlow.
- Go to the directory where the msopgen tool is located and create an operator project.

Run the following command for a TBE operator:

```shell
./msopgen gen -i json_path/IR_json.json -f tf -c ai_core-{Soc Version} -out ./output_data
```

Run the following command for an AI CPU operator:

```shell
./msopgen gen -i json_path/IR_json.json -f tf -c aicpu -out ./output_data
```
- Set -i to the actual path of the IR_json.json file, for example, ${INSTALL_DIR}/python/site-packages/op_gen/json_template/IR_json.json.
- Set {Soc Version} in the -c parameter of the TBE operator project to the version of the Ascend AI Processor.
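The -c value is simply the string `ai_core-` followed by `Ascend` plus the Chip Name reported by `npu-smi info`. A minimal sketch of the concatenation, where CHIP_NAME is a hypothetical placeholder rather than output from a real device:

```shell
# CHIP_NAME is a hypothetical placeholder; on a real server, take it from the
# "Chip Name" column of `npu-smi info`.
CHIP_NAME="xxxyy"
COMPUTE_UNIT="ai_core-Ascend${CHIP_NAME}"
echo "${COMPUTE_UNIT}"   # prints ai_core-Ascendxxxyy
```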
- (Optional) Select an operator.
- If the input IR_json.json file contains only one operator prototype definition or the operator type is specified by using the -op option, skip this step.
- If the input IR_json.json file contains more than one prototype definition and the -op option is not used, the tool prompts you to select an operator by entering the operator sequence number.
When the tool prompts you to enter the sequence number of an operator, enter 1.
```
There is more than one operator in the .json file:
1  Op_1
2  Op_2
Input the number of the op: 1
```
When the message "Generation completed" is displayed, the Op_1 operator project is created. Op_1 is the op value in the file.
- View the operator project directory.
- The TBE operator project directory is generated in the ./output_data directory specified by -out. The directory is organized as follows:
```
├── build.sh                      // Entry to the build script
├── cmake
│   ├── config.cmake
│   ├── util                      // Directory of build scripts of the operator project and common build files
├── CMakeLists.txt                // Build script of the operator project
├── framework                     // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── CMakeLists.txt
│   ├── tf_plugin                 // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   ├── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   │   └── CMakeLists.txt
│   └── onnx_plugin               // Directory of the generated operator plugin code when the source framework is ONNX
│       ├── CMakeLists.txt
│       └── conv2_d_plugin.cc     // Implementation file of the operator plugin
├── op_proto                      // Directory of the operator prototype definition files and the CMakeLists file
│   ├── conv2_d.h
│   ├── conv2_d.cc
│   ├── CMakeLists.txt
├── tbe
│   ├── CMakeLists.txt
│   ├── impl                      // Directory of operator implementation files
│   │   └── conv2_d.py            // Operator implementation file
│   ├── op_info_cfg               // Directory of the operator information library files
│   │   └── ai_core
│   │       └── {Soc Version}     // Ascend AI Processor version
│   │           └── conv2_d.ini   // Operator information library definition file
├── op_tiling                     // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
│   ├── CMakeLists.txt
├── scripts                       // Directory of scripts used for custom operator project packaging
```
- The AI CPU operator project directory is generated in the ./output_data directory specified by -out. The directory is organized as follows:
```
├── build.sh                      // Entry to the build script
├── cmake
│   ├── config.cmake
│   ├── util                      // Directory of build scripts of the operator project and common build files
├── CMakeLists.txt                // Build script of the operator project
├── cpukernel
│   ├── CMakeLists.txt
│   ├── impl                      // Directory of operator implementation files
│   │   ├── conv2_d_kernels.cc
│   │   └── conv2_d_kernels.h
│   ├── op_info_cfg
│   │   └── aicpu_kernel
│   │       └── conv2_d.ini       // Operator information library definition file
│   └── toolchain.cmake
├── framework                     // Directory of the operator plugin implementation files. Ignore this directory for a PyTorch operator.
│   ├── CMakeLists.txt
│   ├── tf_plugin                 // Directory of the generated operator plugin code when the source framework is TensorFlow
│   │   ├── tensorflow_conv2_d_plugin.cc   // Implementation file of the operator plugin
│   │   └── CMakeLists.txt
│   └── onnx_plugin               // Directory of the generated operator plugin code when the source framework is ONNX
│       ├── CMakeLists.txt
│       └── conv2_d_plugin.cc     // Implementation file of the operator plugin
├── op_proto                      // Directory of the operator prototype definition files and the CMakeLists file
│   ├── conv2_d.h
│   ├── conv2_d.cc
│   ├── CMakeLists.txt
├── op_tiling                     // Directory of files related to operator tiling. Ignore this directory if operator tiling is not involved.
│   ├── CMakeLists.txt
├── scripts                       // Directory of scripts used for custom operator project packing
```
- Optional: Add an operator to an existing operator project.
To add more custom operators to an existing operator project, include the -m 1 option in the command.
Go to the directory where the msopgen tool is located and run the following command:
Example for TBE operators:

```shell
./msopgen gen -i json_path/**.json -f tf -c ai_core-{Soc Version} -out ./output_data -m 1
```

Example for AI CPU operators:

```shell
./msopgen gen -i json_path/**.json -f tf -c aicpu -out ./output_data -m 1
```
- Set -i to the actual path of the IR_json.json file, for example, ${INSTALL_DIR}/python/site-packages/op_gen/json_template/IR_json.json.
- Set {Soc Version} in the -c parameter of the TBE operator project to the version of the Ascend AI Processor.
The operator defined in the **.json file is added to the operator project directory. Note that operators of frameworks other than MindSpore, as well as MindSpore TBE operators, cannot be added to a MindSpore AI CPU operator project.
Supplementary Notes
For details about other parameters of the msopgen tool, see Table 6.