Timeline View

Timeline View consists of the navigation pane on the left, the graphical pane on the right, and the data pane at the bottom, as shown in the following figures.
Figure 1 Timeline View
Figure 2 Timeline View (PyTorch E2E Profiling scenario)
  • On the navigation pane, you can view each timeline's label and its dependencies.
  • On the graphical pane, events are displayed in timelines.
  • The data pane displays the collected profile data in tables, including Event View, Statistics, and AI Core Metrics.
  • The time values in the profile data are system monotonic times, which are meaningful only relative to one another and do not correspond to wall-clock (real) time.
  • Start Time and End Time indicate the time range of the collected profile data.
  • Current Time indicates the start time of the time segment where the cursor is located.
  • Move the cursor to a sampling point to view the detailed analysis data.
  • To see the sequence of a timeline in the Event View window, right-click a timeline label in the left navigation pane and choose Show in Event View from the shortcut menu.
  • In Timeline View, you can check the APIs called and the operations executed.
  • If multiple OS Runtime APIs within a thread are executed simultaneously, they are displayed in different lines.
  • If multiple AI Core tasks within a stream are executed simultaneously, they are displayed in different lines.
  • To zoom in or out on the timeline of interest, select a time point and hold down Ctrl while scrolling the mouse wheel up or down. You can also click the zoom-in or zoom-out button in the upper right corner of the window. To restore the timeline view, click the restore button in the upper right corner of the window.
  • After selecting a time point in a timeline, drag the cursor leftward or rightward to select a duration. Current Time (us) serves as the boundary point of the selected time range.
  • In cluster or multi-device scenarios, profile data of the first iteration of the model ID with the largest number of iterations, on the rank or device with the smallest ID, is exported by default.
  • Under the PyTorch framework, profile data of the first iteration on the rank or device with the smallest ID is exported by default, as shown in Figure 2.
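The rule that simultaneous OS Runtime APIs within a thread, or simultaneous AI Core tasks within a stream, are drawn on separate lines can be sketched as a greedy lane assignment over time intervals. This is an illustrative sketch, not the tool's actual algorithm or data format:

```python
def assign_lanes(events):
    """Greedily assign overlapping (start, end) intervals to display lanes.

    Events that overlap in time receive different lane indices, mirroring
    how the graphical pane stacks simultaneous tasks on separate lines.
    """
    lanes = []       # lanes[i] = end time of the last event placed on lane i
    placement = []   # lane index chosen for each event, in sorted order
    for start, end in sorted(events):
        for i, lane_end in enumerate(lanes):
            if start >= lane_end:        # lane i is free again: reuse it
                lanes[i] = end
                placement.append(i)
                break
        else:                            # every lane is still busy: open a new line
            lanes.append(end)
            placement.append(len(lanes) - 1)
    return placement

# Three tasks in one stream; the first two overlap, so they occupy lanes 0 and 1,
# and the third starts after the first ends, so it reuses lane 0.
print(assign_lanes([(0, 10), (5, 12), (11, 20)]))  # [0, 1, 0]
```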

After the profiling project is executed, the inference/training process is displayed in a timeline view based on the scheduling procedure. The actual display depends on the options selected during profiling and the device in use.

The following tables describe the fields in display order.

Table 1 Hardware information

  • CPU: CPU usage data.
  • Memory: Memory usage data.
  • Disk: Disk usage data.
  • Network: Network bandwidth data.
  • Start Time: Start time of the CPU, memory, disk, or network bandwidth data, in μs.
  • End Time: End time of the CPU, memory, disk, or network bandwidth data, in μs.
  • Duration: Running duration of the CPU, memory, disk, or network bandwidth data, in μs.
  • Usage: Usage of the CPU, memory, disk, or network bandwidth.

Note: When you move the cursor over a timeline, Start Time, End Time, Duration, and Usage are displayed.
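Because the timeline values are monotonic times, only differences between them (such as Duration = End Time - Start Time) are meaningful; the raw values cannot be converted to calendar dates. A minimal Python sketch of the distinction, for illustration only:

```python
import time

# Monotonic time: only differences are meaningful, as in the timeline view.
t0 = time.monotonic()
time.sleep(0.01)
t1 = time.monotonic()

duration_us = (t1 - t0) * 1_000_000  # Duration = End Time - Start Time, in us
print(f"duration: {duration_us:.0f} us")

# Wall-clock time, by contrast, does map to a real calendar date.
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
```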

Table 2 Time spent by components

  • Process {ID}: Process ID.
  • Thread {ID}: Thread ID.
  • MsprofTX: MsprofTX profile data.
  • Os Runtime: Timeline of OS Runtime API calls in each thread.
  • PTA: Timeline of the task queue delivered asynchronously by operators at the PyTorch layer.
  • AscendCL API: Time taken by models, operators, and Runtime APIs. If a metric is not collected, the corresponding profiling result is not available.
      • ACL_RTS: AscendCL APIs of the RTS type.
      • ACL_MODEL: AscendCL APIs of the MODEL type.
  • Runtime API: Timeline of Runtime API calls in each thread.
  • GE: Time taken by the model to input, infer, and output data.
  • Start Time: Time when an API starts to run, in μs.
  • End Time: Time when an API stops running, in μs.
  • Duration: Time taken by calls to the current API, in μs.
  • Name: API name.

Note: When you move the cursor over a timeline, Start Time, End Time, Duration, and Name are displayed.
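The Start Time, End Time, and Name fields are sufficient to derive simple per-API statistics like those shown in the data pane. A hypothetical sketch over such records (the field names and API names below are illustrative assumptions, not the tool's export format):

```python
from collections import defaultdict

def api_statistics(events):
    """Aggregate call count and total duration (us) per API name."""
    stats = defaultdict(lambda: {"calls": 0, "total_us": 0.0})
    for e in events:
        s = stats[e["name"]]
        s["calls"] += 1
        s["total_us"] += e["end"] - e["start"]  # Duration = End Time - Start Time
    return dict(stats)

# Hypothetical API call records with monotonic timestamps in us.
events = [
    {"name": "aclrtMemcpy",   "start": 0.0,  "end": 12.0},
    {"name": "aclrtMemcpy",   "start": 20.0, "end": 30.0},
    {"name": "aclmdlExecute", "start": 5.0,  "end": 105.0},
]
print(api_statistics(events))
# {'aclrtMemcpy': {'calls': 2, 'total_us': 22.0}, 'aclmdlExecute': {'calls': 1, 'total_us': 100.0}}
```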

Table 3 System data of the Ascend AI Processor

  • NPU {ID}: ID of the Ascend AI Processor.
  • Step Trace: Iteration trace data, which records the time required for each iteration.
  • model {ID}: Model IDs, displayed in sequence under Step Trace. You can export and display the data of an iteration of a model in either of the following ways:
      • Click the export button of an iteration of a model, and click Yes in the dialog box displayed.
      • Specify Device ID, Model ID, and Iteration ID in the upper left corner of the page, and click Export.
  • Name: API name.
  • Iteration ID: Iteration ID.
  • FP Start: Forward propagation (FP) start time, in μs.
  • Iteration End: End time of each iteration, in μs.
  • Iteration Time: Iteration duration, in μs.
  • Stream {ID}: Stream ID.
  • AI Core task: Timeline of AI Core tasks within each stream.
  • AI CPU task: Timeline of AI CPU tasks within each stream.
  • Other task: Timeline of other tasks within each stream.
  • Start Time: Time when the AI Core, AI CPU, or other task starts to run, in μs.
  • End Time: Time when the AI Core, AI CPU, or other task stops running, in μs.
  • Duration: Time consumed by the AI Core, AI CPU, or other task, in μs.
  • Status: Running status of the task.
  • Task Type: Type of the task.
  • Stream ID: Stream ID of the task.
  • Op Name: Operator name.
  • Task ID: ID of the task.

Note: When you move the cursor over a timeline, Name, Iteration ID, FP Start, Iteration End, Iteration Time, Start Time, End Time, Duration, Status, Task Type, Stream ID, Op Name, and Task ID are displayed.
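Since the export dialog asks for a specific Iteration ID, the Iteration Time field is a natural way to decide which iteration to inspect, for example the slowest one. A small illustrative sketch (record layout assumed, not the tool's export format):

```python
def slowest_iteration(steps):
    """Return the (iteration_id, iteration_time_us) pair with the longest time.

    Useful for choosing which iteration to export and examine in detail.
    """
    return max(steps, key=lambda s: s[1])

# Hypothetical Step Trace records: (Iteration ID, Iteration Time in us).
steps = [(1, 980.0), (2, 1210.5), (3, 995.2)]
print(slowest_iteration(steps))  # (2, 1210.5)
```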