IterateAll

Function Description

Each call to IterateAll computes a matrix C of size singleCoreM x singleCoreN. The iteration sequence can be adjusted using the tiling parameter iterateOrder.

Prototype

template <bool sync = true> __aicore__ inline void IterateAll(const GlobalTensor<DstT>& gm, uint8_t enAtomic = 0, bool enSequentialWrite = false, bool waitIterateAll = false, bool fakeMsg = false)

template <bool sync = true> __aicore__ inline void IterateAll(const LocalTensor<DstT>& ubCmatrix, uint8_t enAtomic = 0)

Parameters

Table 1 Parameters in the template

Parameter

Description

sync

Specifies whether matrix C is obtained in synchronous or asynchronous mode.

  • Synchronous mode: the call blocks until IterateAll has finished executing.
  • Asynchronous mode: the call returns immediately without waiting for IterateAll to finish.

Setting it to true (the default) selects synchronous mode; setting it to false selects asynchronous mode.
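In asynchronous mode, computation can overlap with other work on the core. A minimal sketch, assuming a Matmul object mm and a destination tensor gm_c have already been set up (the variable names are illustrative), with waitIterateAll set to true because WaitIterateAll is used for the wait:

```cpp
mm.template IterateAll<false>(gm_c, 0, false, true);  // asynchronous: returns without waiting
// ... other work can overlap with the Matmul computation here ...
mm.WaitIterateAll();  // block until the asynchronous IterateAll has completed
```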

Table 2 API parameters

Parameter

Input/Output

Description

gm

Input

Address for storing matrix C in Global Memory.

ubCmatrix

Input

Address for storing matrix C in Local Memory. The TPosition can be set to TPosition::TSCM.

enAtomic

Input

Specifies whether to enable an atomic operation when writing matrix C.

Values:

0 (default): disables the atomic operation.

1: enables the AtomicAdd (accumulation) operation.

2: enables the AtomicMax (maximum value) operation.

3: enables the AtomicMin (minimum value) operation.

enSequentialWrite

Input

Specifies whether continuous write mode is enabled. With continuous write, the result is written to a [baseM, baseN] region; with discontinuous write, to a [singleCoreM, singleCoreN] region. The default value is false (discontinuous write).

waitIterateAll

Input

Used only in asynchronous mode. Indicates whether WaitIterateAll is called to wait for IterateAll to complete.

true: WaitIterateAll is called to wait for IterateAll to complete.

false: WaitIterateAll is not called; developers handle the waiting themselves.

fakeMsg

Input

Used only in the IBShare scenario where doIBShareNorm is enabled.

IBShare ensures that the same matrix A or matrix B data in L1 is reused, which requires the number of IterateAll calls to match across the AIV cores. When a core has no real work to contribute, set fakeMsg to true: IterateAll is still called (so the calls stay paired), but no computation is performed. The default value is false, meaning computation is performed.
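The atomic modes described above let multiple calls accumulate into the same output address. A hedged sketch of AtomicAdd accumulation, assuming a Matmul object mm and a destination tensor gm_c are set up as in the Example section (names are illustrative):

```cpp
mm.IterateAll(gm_c, 1);  // enAtomic = 1: results are added onto the existing contents of gm_c
```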

Returns

None

Availability

Precautions

Ensure that the size of the address space allocated for matrix C is greater than or equal to singleCoreM x singleCoreN.

Example

REGIST_MATMUL_OBJ(&pipe, GetSysWorkSpacePtr(), mm, &tiling);  // Register and initialize the Matmul object
mm.SetTensorA(gm_a);    // Set matrix A
mm.SetTensorB(gm_b);    // Set matrix B
mm.SetBias(gm_bias);    // Set the bias
mm.IterateAll(gm_c);    // Compute matrix C and write it to gm_c
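With the second prototype, matrix C is written to Local Memory instead of Global Memory. A minimal sketch, assuming a LocalTensor ub_c (illustrative name) has been allocated with sufficient space:

```cpp
mm.SetTensorA(gm_a);   // Set matrix A
mm.SetTensorB(gm_b);   // Set matrix B
mm.IterateAll(ub_c);   // Compute matrix C and write it to Local Memory; enAtomic defaults to 0
```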