ST
Overview
MindStudio IDE provides an upgraded ST framework that automatically generates test cases, verifies operator functionality and compute accuracy in a real hardware environment, and generates an execution report. Feature details are as follows:
- Builds the operator project, deploys the operators in the built-in OPP, and runs test cases in the hardware environment to verify the operator functionality.
- Generates an operator test case definition file based on the operator information library.
- Designs operator test cases based on the operator test case definition file.
- Generates an ST report (st_report.json) that displays information about test cases and phase-by-phase execution states.
- Generates a test function, compares the expected operator output and the actual operator output, and displays the comparison result to verify the compute accuracy.
Prerequisites
- Development of the following custom operator deliverables has been completed: Operator Code Implementation (TBE DSL)/Operator Code Implementation (AI CPU), Operator Prototype Definition, and Operator Information Library Definition. The ST does not test the operator plugins.
- MindStudio IDE has been connected to a hardware device.
Generating an ST Case Definition File
- An ST case file can be created in any of the following ways:
- Right-click the root directory of the operator project and choose from the shortcut menu.
- Right-click the operator information library definition file to create ST cases.
TBE operator: Right-click {project name}/tbe/op_info_cfg/ai_core/{soc version}/xx.ini and choose .
AI CPU operator: Right-click {project name}/cpukernel/op_info_cfg/aicpu_kernel/xx.ini and choose .
- If ST cases of the operator exist, right-click the testcases or testcases > st directory, and choose from the shortcut menu to add more ST cases.
- In the Create ST Cases for an Operator dialog box, select the operator for which the ST cases need to be created.
See the following figure.

- Operator Name: Select an operator name from the Operator drop-down list.
- SoC Version: Select the version of the Ascend AI Processor from the SoC Version drop-down list. If the ST is performed on the AI CPU operator, aicpu_kernel is selected by default.
- If you do not select Import operator info from a model and click OK, a test case definition file with empty shapes is generated for the operator. You need to configure the shape information to generate test data and test cases. Configure the remaining fields as required. For details, see Fields in the TBE Operator ST Case Definition File or Fields in the AI CPU ST Operator Test Case Definition File.
- If you select Import operator info from a model and upload a TensorFlow model file (.pb) that contains the operator or a model file in ONNX format, the input-layer shape of the obtained model is displayed.
You can also modify the input-layer shape information in Input Nodes Shape. After you click OK, the tool automatically dumps the shape information of the selected operator based on the input-layer shape and generates the corresponding operator test case definition file.
This file is used for generating test data and test cases. You can modify related fields. For details, see Fields in the TBE Operator ST Case Definition File or Fields in the AI CPU ST Operator Test Case Definition File.
To use this function, you need to install a third-party framework in the operating environment. (To use this function on PyTorch, you need to install the ONNX library in the operating environment.)
- To compare the expected data with the benchmark data, define and configure a function for generating the expected operator data.
- Customize a function for generating the expected operator data.
You can customize a Python function that generates the expected operator data, producing benchmark data on the CPU. For example, you can use APIs such as NumPy to implement a function with the same behavior as the custom operator. Operator accuracy is tested by comparing this benchmark data with the operator output. Multiple expected-data generation functions can be implemented in one Python file. Keep the inputs, outputs, and attributes (including the format, type, and shape) of the function consistent with those of the custom operator.
- Edit the test case definition file.
Configure the function in the test case definition file. You can configure it in the Design view or the Text view.
- In the Design view, set Script Path in the Expected Result Verification dialog box to the path of the Python file that implements the function. In Script Function, enter the name of the function for generating the expected operator data.

In Script Function, you can choose to enter the function name or leave it blank.
- When the name of the function is entered, the function is called to generate the benchmark data during ST.
- If the function name is left empty, the function with the same name as the custom operator in the Python file under Script Path is automatically matched to generate benchmark data during ST. If no function with the same name exists, a message indicating match failure is displayed.
- In the Text view, if the calc_expect_func_file parameter is added, the value is the file path and name of the function for generating the expected operator data. For example:
"calc_expect_func_file": "/home/teste/test_*.py:function", // Configure the expected operator file.
Here, /home/teste/test_*.py indicates the implementation file of the function for generating the expected operator data, and function indicates the function name. Separate the file path and function name with a colon (:).
Example: The test_add.py file is the expected operator data generation file of the Add operator. The function implementation is as follows:
def calc_expect_func(input_x, input_y, out):
    res = input_x["value"] + input_y["value"]
    return [res, ]
You need to create the function for generating the expected operator data based on the developed custom operator. The name arguments of all Input, Output, and Attr in the test case definition file are used as the input parameters of this function.
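As an illustration of this signature convention, the following sketch shows a hypothetical expected-data function for a Mul operator whose test case definition names its inputs x1 and x2 and its output y. The operator, the parameter names, and the "shape"/"dtype" keys are assumptions for this example; only the "value" key is taken from the Add example above.

```python
import numpy as np

# Hypothetical expected-data function for a Mul operator. Each parameter
# arrives as a dict describing the tensor; the "value" key holds the data.
def calc_expect_func(x1, x2, y):
    res = x1["value"] * x2["value"]
    return [res, ]

# Local sanity check with NumPy arrays standing in for the ST-generated data.
a = {"value": np.array([1.0, 2.0]), "shape": (2,), "dtype": "float32"}
b = {"value": np.array([3.0, 4.0]), "shape": (2,), "dtype": "float32"}
out = calc_expect_func(a, b, {"shape": (2,)})
print(out[0])  # element-wise product: [3. 8.]
```

Checking the function locally like this, before referencing it from the test case definition file, makes mismatches in parameter order or names easier to catch.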
- To use the Python script to generate test cases in batches, adopt the fuzz mode.
- Implement the fuzzing script, which automatically generates the parameters other than names of the Input[xx], Output[xx], and Attr fields in the test case definition file.
The following demonstrates how to generate random shape and value arguments through an example script (fuzz_shape.py). In this example, the shape for ST is dynamic. The dimension size ranges from 1 to 4, and the value of each dimension ranges from 1 to 64.
- Import the required dependency.
import random
import numpy as np
- Implement the fuzz_branch() method. If you customize the name of the method for randomly generating the parameters to be tested, configure the customized method in the Fuzz Function field in the operator test case definition file.
def fuzz_branch():
    # Generate shape values for testing.
    dim = random.randint(1, 4)
    x_shape_0 = random.randint(1, 64)
    x_shape_1 = random.randint(1, 64)
    x_shape_2 = random.randint(1, 64)
    x_shape_3 = random.randint(1, 64)
    if dim == 1:
        shape = [x_shape_0]
    if dim == 2:
        shape = [x_shape_0, x_shape_1]
    if dim == 3:
        shape = [x_shape_0, x_shape_1, x_shape_2]
    if dim == 4:
        shape = [x_shape_0, x_shape_1, x_shape_2, x_shape_3]
    # Randomly generate the values of x1 and x2 according to the shape argument.
    fuzz_value_x1 = np.random.randint(1, 10, size=shape)
    fuzz_value_x2 = np.random.randint(1, 10, size=shape)
    # Return shape values of the dictionary type to input_desc's x1 and x2 and
    # output_desc's y. In the test case definition file, x1 and x2 are the
    # inputs' names and y is the output's name.
    return {"input_desc": {"x1": {"shape": shape, "value": fuzz_value_x1},
                           "x2": {"shape": shape, "value": fuzz_value_x2}},
            "output_desc": {"y": {"shape": shape}}}
- This method generates all arguments except the name arguments of Input[xx], Output[xx], and Attr in the test case definition file. You can also customize the generation method for the operator test as needed.
- The generated returns are of the dictionary type and assigned to the operator in this type for ST. The returned dictionary is the same as the parameter structure in the test case definition file.
- The name of the test case generated by this method is in the Test_OpType_001_sub_case_001_format_type format, where format and type are taken from the first output of the case. If the case does not contain any output, the name has no such suffix, that is, Test_OpType_001_sub_case_001.
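Before handing a fuzzing script to the tool, it can be useful to sanity-check the returned dictionary locally. A minimal sketch, reusing the x1/x2/y names from the example above (the compact shape generation is an illustrative variant, not the tool's own code):

```python
import random
import numpy as np

# Minimal fuzz function in the shape the tool expects: a dict mirroring the
# input_desc/output_desc entries of the test case definition file.
def fuzz_branch():
    dim = random.randint(1, 4)
    shape = [random.randint(1, 64) for _ in range(dim)]
    return {
        "input_desc": {
            "x1": {"shape": shape, "value": np.random.randint(1, 10, size=shape)},
            "x2": {"shape": shape, "value": np.random.randint(1, 10, size=shape)},
        },
        "output_desc": {"y": {"shape": shape}},
    }

case = fuzz_branch()
print(sorted(case["input_desc"]))            # ['x1', 'x2']
print(len(case["output_desc"]["y"]["shape"]))  # between 1 and 4
```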
- Set the value of the field that needs to be randomly generated in the operator test case definition file to fuzz.
The K-Level Test Cases configuration item is displayed, including Fuzz Script Path and Fuzz Case Num.
- In Fuzz Script Path, enter the relative or absolute path of the fuzz testing script, for example, fuzz_shape.py.
- In Fuzz Case Num, enter the number of test cases generated using the fuzz test script, for example, 2000.
- (Optional) In Fuzz Function, configure a user-defined method for randomly generating parameters to be tested. If the method is not configured, fuzz_branch is used by default.
- Because only one parameter-value profile is supported, the value of each field in the operator test case definition file must be unique.
- Click Save to save the modification to the operator test case definition file.
The TBE operator test case definition file (named as OpType_case_timestamp.json) is stored in the testcases/st/OpType/{SoC Version} directory under the root directory of the operator project.
The AI CPU operator test case definition file (named as OpType_case_timestamp.json) is stored in the testcases/st/OpType/aicpu_kernel directory under the root directory of the operator project.
Strictly follow the naming rules of the operator test case definition file. Do not place irrelevant files named in this format in the testcases/st/OpType/{SoC Version} or testcases/st/OpType/aicpu_kernel directory under the root directory of the operator project. Otherwise, errors will occur during file parsing.
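As a rough illustration of the naming rule, the check below validates file names against the OpType_case_timestamp.json pattern. The exact characters allowed in OpType and the timestamp format are assumptions for this sketch.

```python
import re

# Best-effort regex for OpType_case_timestamp.json (assumption: OpType is
# alphanumeric/underscore and the timestamp is all digits).
CASE_FILE_RE = re.compile(r"^[A-Za-z0-9_]+_case_\d+\.json$")

def is_st_case_file(name):
    return bool(CASE_FILE_RE.match(name))

print(is_st_case_file("Add_case_20240101123000.json"))  # True
print(is_st_case_file("notes.json"))                    # False
```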
Running ST Cases
- (Optional) Configure the path of the third-party library referenced by the AI CPU operator by modifying the cpukernel/CMakeLists.txt file in the operator project directory.
- include_directories: Add the directories of the header files to be included.
The following is an example:
include_directories(
    directoryPath1
    directoryPath2
)
- link_directories: Add the directories of the library files to be linked with.
The following is an example:
link_directories(
    directoryPath3
    directoryPath4
)
- link_libraries: Add the library files on which the operator build depends.
The following is an example:
link_libraries(
    libName1
    libName2
)
- Run the ST cases. Right-click the ST case definition file (TBE operator: testcases > st > OpType > {SoC Version} > xxxx.json; AI CPU operator: testcases > st > OpType > aicpu_kernel > xxxx.json) generated in Generating an ST Case Definition File and choose Run ST Case 'xxx.json'.
Table 1 Run configurations
- Name: Name of the run configuration (user-defined).
- Test Type: Select st_cases.
- Environment Variables
- Operator Name: Select the operator to test.
- SoC Version: Select the Ascend AI Processor type.
  NOTE: If the Ascend AI Processor type is of the Ascend 310P AI Processor or Ascend 910 AI Processor series, select a specific type based on the actual requirements.
- Executable File Name: Select the test case definition file to run from the drop-down list. If the ST is performed on an AI CPU operator, (AI CPU) is displayed in front of the test case file.
- Toolchain: Toolchain configurator, which preconfigures a custom toolchain with the same architecture as the installed CANN package and supports local build. You can click Manage toolchains… to customize the toolchain. For details, see Toolchains.
- Case Names: Specify the names of the cases to execute. All cases are selected by default; you can deselect unnecessary cases.
- Enable Auto Build&Deploy: Determine whether to perform build and deployment during ST running. This function is enabled by default.
- Advanced Options: Advanced options.
- ATC Log Level: Select an ATC log level:
  - INFO
  - DEBUG
  - WARNING
  - ERROR
  - NULL
- Precision Mode: Set the precision mode. Possible values are:
  - force_fp16
  - allow_mix_precision
  - allow_fp32_to_fp16
  - must_keep_origin_dtype
- Device Id: Set the ID of the device that runs the ST. Specify the ID of the AI Processor in use.
- Error Threshold: Customize the precision standard. The value is a list containing two elements, for example, [val1,val2]. Value range: [0.0, 1.0]
  - val1: threshold of the error between the operator output result and the benchmark data. If the actual error is greater than this threshold, the operator output result is recorded as error data.
  - val2: threshold of the ratio of error data to all data. If the actual ratio of error data to all data is less than this threshold, the precision meets the requirement.
- Generate Error Report: Generate reports for inconsistent comparison data. This option is enabled by default. It generates error reports for failed ST cases, for example, the {case.name}_error_report.csv file in the testcases/st/out/OpType/run/out/test_data/data/st_error_reports directory. A single .csv file can contain a maximum of 50,000 lines of data. If the number of lines exceeds 50,000, new .csv files are generated in sequence, named in the format {case.name}_error_report0.csv. Before using the Generate error report function, specify the custom script of the function for generating the expected operator data. For details, see 3.
- Enable System Profiler: Enable profiling to obtain the profile data of the operator on the Ascend AI Processor. This option is disabled by default. To use this function, set the path of the msprof tool in the operating environment (toolkit/tools/profiler/bin/msprof) in the PATH environment variable. For TBE operators, if this option is enabled, you can select the following AI Core performance metrics from the drop-down list:
  - PipeUtilization (default)
  - ArithmeticUtilization
  - Memory
  - MemoryL0
  - MemoryUB
  - ResourceConflictRatio
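The 50,000-line splitting behavior described for Generate Error Report can be sketched as follows. The helper write_error_reports and the row contents are illustrative; only the file-name pattern ({case.name}_error_report.csv, then {case.name}_error_report0.csv and onward) follows the text above.

```python
import csv

MAX_LINES = 50000  # per-file line limit stated in the documentation

def write_error_reports(case_name, rows, out_dir="."):
    # Split the rows into chunks of at most MAX_LINES lines each.
    chunks = [rows[i:i + MAX_LINES] for i in range(0, len(rows), MAX_LINES)]
    paths = []
    for idx, chunk in enumerate(chunks):
        # First file has no numeric suffix; overflow files start at 0.
        suffix = "" if idx == 0 else str(idx - 1)
        path = f"{out_dir}/{case_name}_error_report{suffix}.csv"
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(chunk)
        paths.append(path)
    return paths
```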
- Set the running user on the host.
Add a host running user in the Deployment dialog box. The user must belong to the HwHiAiUser group.
- Configure environment variables for related components in the operating environment using the following methods:
- Add the environment variables in Environment Variables.
Set the environment variables in the Environment Variables area as described in 2.
- Add environment variables in the text box.
ASCEND_DRIVER_PATH=/usr/local/Ascend/driver;ASCEND_HOME=/usr/local/Ascend/ascend-toolkit/latest;ASCEND_AICPU_PATH=${ASCEND_HOME}/<target architecture>-linux
ASCEND_AICPU_PATH needs to be configured only for AI CPU operators. Configure the environment variables based on the actual installation paths of the driver and CANN.
- Click the add icon and enter information in the dialog box that is displayed. Type the environment variable name in the Name field and the value in the Value field.
- Click Run.
MindStudio IDE generates test data and code in testcases/st/out/<operator name> in the root directory of the operator, builds executable files, and executes test cases on the specified hardware device. After successful running, MindStudio prints the comparison report and saves the running information.
- Prints the report of comparison between the execution result and the benchmark data in the Output window.
- General information
total_count: total number of compared elements.
max_diff_thd: maximum error threshold. If the difference between the expected result and the actual result exceeds this threshold, the test case fails.
- Details
Index (sequence numbers of comparison elements), ExpectOut (expected output), RealOut (actual output), FpDiff (precision error), and RateDiff (error ratio).
- Error tolerance information and result
If the proportion of elements whose error relative to the expected result is within DiffThd (error threshold) exceeds PctThd (accuracy threshold), the ST passes. Otherwise, the ST fails and PctRlt (actual accuracy) is printed.
- system_profiler information and result
The system_profiler information and results are displayed in a table only after Enable System Profiler is enabled in 2 and collection items are configured. For details about the parameters, see the Profiling Instructions.
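The tolerance rule described above can be paraphrased in code. The sketch below assumes the error is measured as a per-element relative difference; the exact metric used by the tool is not specified here, and the function name is illustrative.

```python
import numpy as np

# An element counts as error data when its relative difference exceeds
# diff_thd; the case passes when the error ratio stays below pct_thd.
def precision_passes(expect, real, diff_thd=0.01, pct_thd=0.05):
    expect = np.asarray(expect, dtype=np.float64)
    real = np.asarray(real, dtype=np.float64)
    rel_diff = np.abs(expect - real) / np.maximum(np.abs(expect), 1e-12)
    error_ratio = np.mean(rel_diff > diff_thd)
    return error_ratio < pct_thd, error_ratio

ok, ratio = precision_passes([1.0, 2.0, 3.0], [1.0, 2.0, 3.5])
print(ok, ratio)  # one of three elements is off, so the 5% bar is not met
```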
- Generates the st_report.json file in testcases/st/out/<operator name> in the root directory of the operator. For details about the st_report.json file, see Table 2.
Table 2 st_report.json description
- run_cmd: Command.
- report_list: List of reports of test cases.
  - trace_detail: Execution details.
    - st_case_info: Test information, including the following:
      - expect_data_path: path of the expected result
      - case_name: test case name
      - input_data_path: path of input data
      - planned_output_data_paths: path of the actual result
      - op_params: operator parameter information
    - stage_result: Result information in each runtime phase, including the following:
      - status: running status of the phase, indicating success or failure
      - result: output result
      - stage_name: phase name
      - cmd: command
  - case_name: Test name.
  - status: Actual test result, either success or failure.
  - expect: Expected test result, either success or failure.
- summary: Summary of comparison of the actual test result and the expected test result.
  - test case count: Number of test cases.
  - success count: Number of test cases whose result is the same as the expected result.
  - failed count: Number of test cases whose result differs from the expected result.
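A small sketch of consuming st_report.json programmatically with the Table 2 fields. The helper summarize is hypothetical; only the field names report_list, case_name, status, and expect come from the table.

```python
import json

# Count how many cases produced the expected result, mirroring the summary
# section of the report.
def summarize(report_path):
    with open(report_path) as f:
        report = json.load(f)
    results = [(c["case_name"], c["status"] == c["expect"])
               for c in report.get("report_list", [])]
    passed = sum(ok for _, ok in results)
    return passed, len(results)
```

Comparing status with expect per case is the same rule the summary's success count and failed count describe.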
