ST
Overview
MindStudio provides an upgraded ST framework to automatically generate test cases, verify operator functionality and compute accuracy in a real hardware environment, and generate an execution report. Feature details are as follows:
- Generates an operator test case definition file based on the operator information library.
- Generates test data of different shapes and dtypes and AscendCL-based test cases from the operator test case definition file.
- Builds the operator project, deploys the operators in the built-in OPP, and runs test cases in the hardware environment to verify the operator functionality.
- Generates an ST report (st_report.json) that displays information about test cases and phase-by-phase execution states.
- Generates a test function, compares the expected operator output and the actual operator output, and displays the comparison result to verify the compute accuracy.
Prerequisites
- Development of the following custom operator deliverables has been completed: Operator Code Implementation, Operator Prototype Definition, and Operator Information Library Definition. The ST does not test the operator plugins.
- MindStudio has been connected to a hardware device.
Generating an ST Case Definition File
- Create ST cases. The following three methods are available:
- Right-click the root directory of the operator project and choose the ST case creation option from the shortcut menu.
- Right-click the operator information library definition file {project name}/cpukernel/op_info_cfg/aicpu_kernel/xx.ini and choose the ST case creation option from the shortcut menu.
- If ST cases of the operator already exist, right-click the testcases or testcases > st directory and choose the option for adding ST cases from the shortcut menu.
- In the Create ST Cases for an Operator dialog box, select the operator for which the ST case needs to be created. See the following figure.

- Select an operator name from the Operator drop-down list box.
- Select the version of the Ascend AI Processor from the SoC Version drop-down list. If the ST is performed on the AI CPU operator, aicpu_kernel is selected by default.
- If Import operator info from a model is not selected, click OK; an operator test case definition file with empty shapes is generated, and you need to configure the shape information before test data and test cases can be generated. Configure the remaining fields as required. For details, see Fields in the AI CPU ST Operator Test Case Definition File.
- If you select Import operator info from a model and upload a TensorFlow model file (.pb) that contains the operator or a model file in ONNX format, the top-layer shape of the obtained model is displayed.
You can also modify the shape information of the input-layer input in Input Nodes Shape. After you click OK, the tool automatically dumps the shape information of the selected operator based on the shape information of the input layer and generates the corresponding operator test case definition file.
This file is used for generating test data and test cases. You can modify related fields. For details, see Fields in the AI CPU ST Operator Test Case Definition File.
To use this function, you need to install the TensorFlow framework and ONNX library in the operating environment. If the Windows OS is used, you need to install the TensorFlow framework and ONNX library on the local Windows host.
- To compare the expected data with the benchmark data, define and configure a function for generating the expected data of the operator.
- Customize a test function for generating expected operator result.
You can implement, in Python, a test function that generates the expected operator result, producing benchmark data on the CPU. For example, you can use APIs such as those of NumPy to implement a function with the same behavior as the custom operator. The operator accuracy is then tested by comparing the benchmark data with the output data. Multiple expected-data generation functions can be implemented in a single Python file. Keep the inputs, outputs, and attributes (including the format, type, and shape) of each function consistent with those of the custom operator.
- Edit the test case definition file.
Configure the function in the test case definition file. You can configure it in either the Design view or the Text view.
- In the Design view, set the script path in the Expected Result Verification dialog box to the path of the Python file. In Script Function, enter the name of the function that generates the expected operator data.

In Script Function, you can either enter a function name or leave the field blank.
- If a function name is entered, that function is called to generate the benchmark data during the ST.
- If the field is left empty, a function with the same name as the custom operator is automatically matched to generate the benchmark data during the ST. If no function with the same name exists, a message indicating a match failure is displayed.
- In the Text view, add the calc_expect_func_file parameter; its value is the file path and the name of the function that generates the expected operator data. For example:
"calc_expect_func_file": "/home/teste/test_*.py:function", // Configure the expected operator data file.
Here, /home/teste/test_*.py is the implementation file of the test function and function is the function name. Separate the file path and the function name with a colon (:).
Example: The test_add.py file is the expected data generation file of the Add operator. The function implementation is as follows:
def calc_expect_func(input_x, input_y, out):
    res = input_x["value"] + input_y["value"]
    return [res, ]
You need to create the expected data generation function based on the developed custom operator. The name fields of all Input, Output, and Attr entries in the test case definition file are used as the input parameters of the expected data generation function.
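Where the custom operator also defines attributes, the attribute names likewise become parameters of the expected-data function. The sketch below is a hypothetical example for an Axpy-like operator (out = alpha * x + y); the function name, parameter names, and parameter order are assumptions for illustration and must match the name fields in your own test case definition file:

```python
import numpy as np

# Hypothetical expected-data function for an Axpy-like operator
# (out = alpha * x + y). The names input_x, input_y, alpha, and out are
# assumptions; they must match the name fields of Input, Output, and Attr
# in the test case definition file.
def calc_expect_axpy(input_x, input_y, alpha, out):
    # As in the Add example, each input arrives as a dictionary whose
    # "value" key holds the NumPy array.
    res = alpha * input_x["value"] + input_y["value"]
    return [res]
```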
- To use the Python script to generate test cases in batches, adopt the fuzz mode.
- Implement the fuzzing script, which automatically generates all parameters other than the name fields of Input[xx], Output[xx], and Attr in the test case definition file.
The following example script (fuzz_shape.py) demonstrates how to generate random shape and value arguments. In this example, the shape for the ST is dynamic: the number of dimensions ranges from 1 to 4, and the size of each dimension ranges from 1 to 64.
- Import the required dependencies.
import random
import numpy as np
- Implement the fuzz_branch() method. If you customize the name of the method that randomly generates the parameters to be tested, configure the customized name in the Fuzz Function field in the operator test case definition file.
def fuzz_branch():
    # Generate shape values for testing.
    dim = random.randint(1, 4)
    x_shape_0 = random.randint(1, 64)
    x_shape_1 = random.randint(1, 64)
    x_shape_2 = random.randint(1, 64)
    x_shape_3 = random.randint(1, 64)
    if dim == 1:
        shape = [x_shape_0]
    if dim == 2:
        shape = [x_shape_0, x_shape_1]
    if dim == 3:
        shape = [x_shape_0, x_shape_1, x_shape_2]
    if dim == 4:
        shape = [x_shape_0, x_shape_1, x_shape_2, x_shape_3]
    # Randomly generate the values of x1 and x2 according to the shape.
    fuzz_value_x1 = np.random.randint(1, 10, size=shape)
    fuzz_value_x2 = np.random.randint(1, 10, size=shape)
    # Return a dictionary assigning the shapes and values to input_desc's x1 and
    # x2 and output_desc's y. In the test case definition file, x1 and x2 are
    # the input names and y is the output name.
    return {"input_desc": {"x1": {"shape": shape, "value": fuzz_value_x1},
                           "x2": {"shape": shape, "value": fuzz_value_x2}},
            "output_desc": {"y": {"shape": shape}}}
- This method generates all arguments except the name fields of Input[xx], Output[xx], and Attr in the test case definition file. You can also customize the generation method for the operator test as needed.
- The function returns a dictionary, which is assigned to the operator in this form for the ST. The returned dictionary mirrors the parameter structure in the test case definition file.
- The name of the test case definition file generated by this method is in the Test_OpType_001_sub_case_001_format_type format, where format and type are taken from the first output of the case. If the case does not contain any output, the file name has no format or type suffix, that is, Test_OpType_001_sub_case_001.
- Set the value of the field that needs to be randomly generated in the operator test case definition file to fuzz.
The K-Level Test Cases configuration item is displayed, including Fuzz Script Path and Fuzz Case Num.
- In Fuzz Script Path, enter the relative or absolute path of the fuzz testing script, for example, fuzz_shape.py.
- In Fuzz Case Num, enter the number of test cases generated using the fuzzing test script, for example, 2000.
- (Optional) In Fuzz Function, configure a user-defined method for randomly generating parameters to be tested. If the method is not configured, fuzz_branch is used by default.
- Because the parameter-value pairs can have only one profile, the value of each field in the operator test case definition file must be unique.
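Conceptually, when Fuzz Case Num is set to N, the fuzz function is invoked once per generated sub-case. The driver loop below is an illustrative sketch of that behavior (not the tool's actual implementation), using a compressed variant of the fuzz_branch() example above:

```python
import random

import numpy as np

def fuzz_branch():
    # Compressed variant of the example script above: a random 1- to 4-dimensional
    # shape, with each dimension size in [1, 64].
    dim = random.randint(1, 4)
    shape = [random.randint(1, 64) for _ in range(dim)]
    return {"input_desc": {"x1": {"shape": shape,
                                  "value": np.random.randint(1, 10, size=shape)},
                           "x2": {"shape": shape,
                                  "value": np.random.randint(1, 10, size=shape)}},
            "output_desc": {"y": {"shape": shape}}}

# Illustrative driver: generate the parameter dictionaries for n sub-cases,
# the way a tool might when Fuzz Case Num is set to n.
def generate_cases(n):
    return [fuzz_branch() for _ in range(n)]
```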
- Click Save to save the modification to the operator test case definition file.
The operator test case definition file (named OpType_case_timestamp.json) is stored in the testcases/st/OpType/aicpu_kernel directory under the root directory of the operator project.
Strictly follow the naming rules of the operator test case definition file. Do not save irrelevant files named in this format to the testcases/st/OpType/aicpu_kernel directory under the root directory of the operator project; otherwise, file parsing errors may occur.
Running ST Cases
- (Optional) Modify the cpukernel/CMakeLists.txt file in the project directory.
- include_directories: Add the directories of the header files to be included.
Example:
include_directories( directoryPath1 directoryPath2 )
- link_directories: Add the directories of the library files to be linked with.
Example:
link_directories( directoryPath3 directoryPath4 )
- link_libraries: Add the library files on which the operator build depends.
Example:
link_libraries( libName1 libName2 )
- Run the ST cases. Right-click the ST case definition file generated in Generating an ST Case Definition File (testcases > st > OpType > aicpu_kernel > xxxx.json) and choose Run ST Case 'xxx' from the shortcut menu.
Table 1 Run configuration
- Name: Name of the run configuration (user-defined).
- Test Type: Select st_cases.
- Execute Mode: Remote Execute or Local Execute.
  NOTE: Local Execute does not apply to Windows OSs.
- Deployment: When Execute Mode is set to Remote Execute, you can use the Deployment function to synchronize the files and folders in a specified project to a specified directory on a remote device. For details, see Deployment.
- CANN Machine: Set the deployment information of the device where the CANN tool is located.
  NOTE: This parameter applies only to Windows OSs.
- Environment Variables: Add environment variables in the text box, separating multiple variables with semicolons (;). You can also click the icon next to the text box and enter each variable in the displayed dialog box: type the name (for example, PATH_1) in the Name field and its value in the Value field. If you select Instead system environment variables, the system environment variables are displayed.
- Operator Name: Select the operator to test.
- SoC Version: Configuration type of the Ascend AI Processor.
- Executable File Name: Select the test case definition file to run from the drop-down list box. If the ST is performed on an AI CPU operator, (AI CPU) is displayed in front of the test case file.
- Toolchain: Toolchain configurator, which preconfigures a custom toolchain with the same architecture as the installed CANN package and supports local and remote builds. You can click Manage toolchains... to customize the toolchain. For details, see Toolchains.
- Case Names: Specify the names of the cases to be executed.
  NOTE: All cases are selected by default. You can deselect unnecessary cases.
- Enable Auto Build&Deploy: Determine whether to perform build and deployment during the ST run.
- Advanced Options: Specify advanced options.
- ATC Log Level: Select an ATC log level: INFO, DEBUG, WARNING, ERROR, or NULL.
- Precision Mode: Set the precision mode. Possible values: force_fp16, allow_mix_precision, allow_fp32_to_fp16, must_keep_origin_dtype.
- Device Id: Set the ID of the device that runs the ST. Specify the ID of the AI processor in use.
- Error Threshold: Customize the precision standard. The value is a list containing two elements, for example, [val1, val2].
  - val1: threshold of the error between the operator output result and the benchmark data. If the actual error is greater than this threshold, the element is recorded as error data.
  - val2: threshold of the ratio of error data to all data. If the actual ratio of error data to all data is less than this threshold, the precision meets the requirement.
  Value range of each element: [0.0, 1.0]
- Generate Error Report: Generate reports for inconsistent comparison data. This option is enabled by default and generates error reports for failed ST cases, for example, the {case.name}_error_report.csv file. Before using the Generate error report function, specify the custom script of the function for generating the expected operator data. For details, see 3.
- The ST supports the setting and query of the board log level. For details, see Log Management.
- Windows OSs do not support the Local Execute function.
- Set the running user on the host.
Add a host running user in the Deployment dialog box. The user must belong to the HwHiAiUser group. For details about how to configure the Deployment function, see Deployment.
- Configure the environment variables of related components in the operating environment.
- On a remote device:
For Ascend EP, you need to configure the environment variables of the component installation paths on the host of the device.
Configure the installation paths of the Runtime, Compiler, and Driver components in the ~/.bashrc file as the host running user. If they are not configured, perform the following operations:
- Add the following information to the ~/.bashrc file:
# Ascend-CANN-Toolkit environment variable. Change it to the actual path.
source $HOME/Ascend/ascend-toolkit/set_env.sh
- Make the environment variables take effect:
source ~/.bashrc
- Add the environment variable in Environment Variables.
Set the environment variables in the Environment Variables area as described in Running ST Cases.
- Add environment variables in the text box.
ASCEND_DRIVER_PATH=/usr/local/Ascend/driver;
ASCEND_HOME=/usr/local/Ascend/ascend-toolkit/latest;
ASCEND_AICPU_PATH=${ASCEND_HOME}/<target architecture>-linux;
If the remote device is in an inference environment:
LD_LIBRARY_PATH=${ASCEND_DRIVER_PATH}/lib64:${ASCEND_HOME}/lib64:$LD_LIBRARY_PATH;
If the remote device is in a training environment:
LD_LIBRARY_PATH=${ASCEND_DRIVER_PATH}/lib64/driver:${ASCEND_DRIVER_PATH}/lib64/common:${ASCEND_HOME}/lib64:$LD_LIBRARY_PATH;
Modify the environment variables based on the actual installation paths of the driver and CANN and the architecture of the remote OS.
- You can also click the icon next to the text box and enter information in the dialog box that is displayed. Type the environment variable name in the Name field and the value in the Value field.
- Click Run.
MindStudio generates the test data and code in /testcases/st/out/<operator name> under the root directory of the operator project, builds the executable files, and executes the test cases on the specified hardware device. After a successful run, MindStudio prints the comparison report and saves the run information.
- Print the report of comparison between the execution result and the benchmark data to the Output window.
- General information
total_count: total number of compared elements.
max_diff_thd: maximum error threshold. If the difference between the expected result and the actual result exceeds this threshold, the test case fails.
- Details
Index (sequence numbers of comparison elements), ExpectOut (expected output), RealOut (actual output), FpDiff (precision error), and RateDiff (error ratio).
- Error tolerance information and result
If the proportion of compared elements whose difference from the expected result is less than DiffThd (error threshold) is higher than PctThd (accuracy threshold), the ST passes; otherwise, the ST fails, and PctRlt (actual accuracy) is printed.
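As a sketch of this tolerance rule (an illustrative reading, not the tool's exact comparison code): an element counts as error data when its relative error exceeds the error threshold, and the case passes when the ratio of error data to all data stays below the accuracy threshold. The function and parameter names below are assumptions for illustration.

```python
import numpy as np

def precision_ok(expect, real, diff_thd=0.01, pct_thd=0.05):
    # diff_thd (val1): per-element relative-error threshold; elements whose
    # error exceeds it are recorded as error data.
    # pct_thd (val2): maximum allowed ratio of error data to all data.
    expect = np.asarray(expect, dtype=np.float64)
    real = np.asarray(real, dtype=np.float64)
    # Guard against division by zero for zero-valued expected elements.
    denom = np.maximum(np.abs(expect), 1e-12)
    rel_err = np.abs(real - expect) / denom
    error_ratio = np.mean(rel_err > diff_thd)
    return bool(error_ratio < pct_thd)
```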
- Generate the st_report.json file in /testcases/st/out/<operator name> in the root directory of the operator. For details about the st_report.json file, see Table 2.
Table 2 st_report.json description
- run_cmd: Command.
- report_list: List of reports of test cases.
  - trace_detail: Execution details.
    - st_case_info: Test information, including the following:
      - expect_data_path: path of the expected result
      - case_name: test case name
      - input_data_path: path of the input data
      - planned_output_data_paths: path of the actual result
      - op_params: operator parameter information
    - stage_result: Result information in each runtime phase, including the following:
      - status: running status of the phase, indicating success or failure
      - result: output result
      - stage_name: phase name
      - cmd: command
  - case_name: Test name.
  - status: Actual test result, either success or failure.
  - expect: Expected test result, either success or failure.
- summary: Summary of the comparison between the actual and expected test results.
  - test case count: number of test cases
  - success count: number of test cases whose result is the same as the expected result
  - failed count: number of test cases whose result differs from the expected result