Accuracy Comparison

Prerequisites

Complete Accuracy Collection to obtain the accuracy data of the NPU and GPU environments.

Comparison

  1. Copy the accuracy data dumped in the GPU environment to the NPU environment.
  2. Create the comparison configuration file compare.json. The following is an example of the file content:
    {
        "npu_path": "./npu/dump_data/step0/rank/dump.json",
        "bench_path": "./gpu/dump_data/step0/rank/dump.json",
        "stack_path": "./npu/dump_data/step0/rank/stack.json",
        "is_print_compare_log": true
    }
  3. Run the following command to export the comparison result to the ./compare_result/accuracy_compare directory:
    msprobe -f pytorch compare -i ./compare.json -o ./compare_result/accuracy_compare -s
    If the following information is displayed, the comparison is successful:
    ...
    Compare result is /xxx/compare_result/accuracy_compare/compare_result_{timestamp}.xlsx
    ...
    The advisor summary is saved in: /xxx/compare_result/accuracy_compare/advisor_{timestamp}.txt
    ************************************************************************************
    *                        msprobe compare ends successfully.                        *
    ************************************************************************************
  4. Analyze the comparison result file.

    The compare tool generates the following files in the output directory:

    • advisor_{timestamp}.txt: provides expert suggestions on APIs that may have accuracy issues.
    • compare_result_{timestamp}.xlsx: lists the details of every API involved in the accuracy comparison and the comparison result. You can locate suspicious operators based on the cell color, the comparison result, whether the accuracy standard is reached, and the error message. However, each indicator has its own determination threshold, so judge the final result based on the actual situation.
      Example:
      Figure 1 Comparison result

    For details about the comparison result analysis, see Accuracy Comparison Result Analysis.
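The compare.json in step 2 can also be generated programmatically, which helps catch path typos before the comparison is run. A minimal sketch using only the Python standard library (the `write_compare_config` helper is illustrative, not part of msprobe; the paths are the example values from this page):

```python
import json
from pathlib import Path


def write_compare_config(npu_dump, bench_dump, stack_file, out_path="compare.json"):
    """Write a comparison configuration file in the format expected by
    `msprobe -f pytorch compare`.

    Warns about any input file that does not exist yet, so path typos
    are caught before the comparison command is run.
    """
    config = {
        "npu_path": str(npu_dump),
        "bench_path": str(bench_dump),
        "stack_path": str(stack_file),
        "is_print_compare_log": True,
    }
    for key in ("npu_path", "bench_path", "stack_path"):
        if not Path(config[key]).is_file():
            print(f"warning: {key} does not exist yet: {config[key]}")
    Path(out_path).write_text(json.dumps(config, indent=4))
    return config


# Example values from step 2 of this page.
config = write_compare_config(
    "./npu/dump_data/step0/rank/dump.json",
    "./gpu/dump_data/step0/rank/dump.json",
    "./npu/dump_data/step0/rank/stack.json",
)
```

Note that `true` in JSON maps to Python's `True`; `json.dumps` serializes it back correctly.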
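When the comparison is part of a larger script, step 3 can be driven from Python so that success is detected from the tool's exit status instead of reading the log by eye. A minimal subprocess sketch (the default command is the msprobe invocation from step 3; the `cmd` parameter is an assumption added only so the wrapper can be exercised without msprobe installed):

```python
import subprocess


def run_compare(config="./compare.json",
                out_dir="./compare_result/accuracy_compare",
                cmd=None):
    """Run the accuracy comparison and report whether it succeeded.

    Returns True when the command exits with status 0, which for
    msprobe corresponds to the "ends successfully" banner in the log.
    """
    if cmd is None:
        # The command from step 3 of this page.
        cmd = ["msprobe", "-f", "pytorch", "compare",
               "-i", config, "-o", out_dir, "-s"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0
```

Checking the return code rather than scraping stdout keeps the wrapper robust to wording changes in the log output.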