Analysis Sample of Network Model Selection
Background
The following uses the Ghostnet network model as an example. Ghostnet is claimed to be a lightweight model with higher inference efficiency than a conventional convolutional network model. We used the Profiling tool to analyze the inference duration of the network model, and the result shows that the duration is unexpectedly long. Analysis of the convolution operations in Ghostnet shows that each Conv operation is split and its outputs are concatenated multiple times, which severely affects execution efficiency. As a result, the model's inference efficiency is lower than that of a common network model. See Figure 1.
The conclusion is that Ghostnet is not a preferred network model.
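The split-and-concatenate pattern described above can be illustrated with a minimal NumPy sketch. Note that the shapes, channel counts, and scalar "convolutions" below are hypothetical stand-ins, not taken from the profiled model: a Ghost-style block computes a small set of primary feature maps, derives "cheap" feature maps from them, and concatenates both sets along the channel axis, so every such block adds a Concat operation to the graph.

```python
import numpy as np

def ghost_block(x, primary_channels, ratio=2):
    """Toy Ghost-style block (hypothetical shapes, NCHW layout).

    A real Ghost module uses a pointwise conv for the primary features
    and a depthwise conv for the cheap features; simple per-channel
    scalings stand in for them here so the dataflow -- and the extra
    Concat each block introduces -- is easy to see.
    """
    # "Primary" features: stand-in for a pointwise convolution.
    primary = x[:, :primary_channels] * 0.5
    # "Cheap" features derived from the primary ones: stand-in for a
    # depthwise convolution applied (ratio - 1) times.
    cheap = [primary * (i + 1) for i in range(ratio - 1)]
    # This concatenation is what appears as a Concat op in the graph.
    return np.concatenate([primary] + cheap, axis=1)

x = np.ones((1, 16, 8, 8), dtype=np.float32)
y = ghost_block(x, primary_channels=8, ratio=2)
print(y.shape)  # channels = primary_channels * ratio
```

Because this pattern repeats in every block, the compiled graph accumulates many Concat operations, which is consistent with the profiling result discussed later.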
Profiling Operations
- Start MindStudio IDE and open a built project.
- On the menu bar, choose Ascend > System Profiler > New Project. The profiling configuration window is displayed.
- In the window shown in Figure 2, set Project Name and Project Location. Click Next.
- Access the Executable Properties page. Set the path for storing the executable file of the profiling project. See Figure 3.
- Access the Profiling Options page and select Task-based. See Figure 4.
- After the preceding configurations are complete, click Start in the lower right corner of the window to start Profiling.
After the execution is complete, the profiling results are automatically displayed at the bottom of the MindStudio IDE window. Click the Statistics view in the data pane below the Timeline view: the aclmdlExecute API, which reflects the overall network execution duration, reaches 66352.469 μs. Then click the AI Core Metrics view to examine the Ghostnet network model: it performs a large number of Concat operations, which take a long time. See Figure 5 and Figure 6.
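To quantify what the AI Core Metrics view shows, the per-operator durations can be aggregated by operator type. The sketch below uses made-up `(op_type, duration_us)` records standing in for rows exported from that view; the field names and values are hypothetical, not the actual profiling output.

```python
from collections import defaultdict

# Hypothetical (op_type, duration_us) records, standing in for rows
# exported from the AI Core Metrics view; real values will differ.
records = [
    ("Conv2D", 120.5), ("ConcatD", 310.2), ("ConcatD", 295.8),
    ("Conv2D", 98.1), ("ConcatD", 401.4), ("Pooling", 55.0),
]

totals = defaultdict(float)
counts = defaultdict(int)
for op_type, dur in records:
    totals[op_type] += dur
    counts[op_type] += 1

# Rank op types by total time to find the bottleneck.
for op_type in sorted(totals, key=totals.get, reverse=True):
    print(f"{op_type}: {counts[op_type]} ops, {totals[op_type]:.1f} us total")
```

Sorting by total duration rather than per-op duration surfaces operators that are individually cheap but numerous, which is exactly the Concat pattern observed here.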
Fault Analysis
As shown in Figure 6, the Ghostnet network model performs a large number of Concat operations, resulting in a long overall execution duration.
Troubleshooting
After repeating the preceding profiling operations with a common convolutional network model, we obtain a new result. In the Statistics view, the overall network execution duration reflected by the aclmdlExecute API is only 14202.312 μs. See Figure 7.
Conclusion
After using the Profiling tool to analyze and compare the inference durations of the two network models, we find that the inference efficiency of the Ghostnet network model is not higher than that of the conventional convolutional network model. Therefore, the Ghostnet network model is not a preferred model for inference.
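The comparison can be made concrete by taking the ratio of the two aclmdlExecute durations measured above (the values come from the Statistics views; the variable names are ours):

```python
ghostnet_us = 66352.469  # Ghostnet overall execution duration
common_us = 14202.312    # common network model overall execution duration

# Ratio of end-to-end execution times on this hardware.
ratio = ghostnet_us / common_us
print(f"Ghostnet took {ratio:.2f}x as long as the common model")
```

On this device, the Ghostnet model therefore ran roughly 4.7 times slower end to end, despite its claimed lightweight design.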





