MatmulConfig

Configures the Matmul template and related parameters. If the MatmulConfig template parameter is not set, the Norm template is enabled by default. The available templates are summarized in Table 1, and the MatmulConfig parameters are described in Table 2.

Table 1 Template features

Each template entry below describes the template's implementation, its advantages, and its recommended usage.

Norm

Implementation: L1 can cache multiple base blocks. MTE2 moves base blocks from GM to L1 multiple times, one base block per move, and previously moved base blocks are not cleared. For example, if depthA1 is set to 6, six base blocks of matrix A are moved to L1 one at a time, so MTE2 performs six moves.

Advantages: The MTE1 pipeline can start early, because MTE1 can begin its computation as soon as one base block has been moved.

Recommended Usage: The Norm template is enabled by default.

MDL, SpecialMDL

Implementation: L1 can cache multiple base blocks. MTE2 moves data from GM to L1 in a single "large-packet" transfer. For example, if depthA1 is set to 6, six base blocks of matrix A are moved to L1 at a time, so MTE2 performs one move.

Advantages: In common large-shape scenarios, this reduces MTE2 cyclic movement and improves performance.

Recommended Usage: Large-shape scenarios.

IBShare

Implementation: In the MIX scenario, when multiple AIVs use the same GM address for matrix A or matrix B, the L1 Buffer is shared to reduce MTE2 movement.

Advantages: Reduces MTE2 movement and improves performance.

Recommended Usage: The GM addresses of matrix A or matrix B are the same across multiple AIVs in the MIX scenario.

Note: To use the IBShare template, the matrix A or matrix B reused by multiple AIVs must be fully loaded into the L1 Buffer.

BasicBlock

Implementation: When there is no tail block and the base block size is fixed, the GetBasicConfig API can be used to configure the input base block size. This fixes the size of the matrix moved by MTE1 and the size of the matrix computed by MMAD in each iteration, reducing the parameter computation workload.

Advantages: Reduces the parameter computation overhead of MTE1 matrix movement and MMAD matrix computation.

Recommended Usage: There is no tail block, and the base block size (baseM, baseN) is fixed.
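A MatmulConfig is usually obtained from a helper function and passed to the Matmul object as a template argument. The sketch below is illustrative only: GetBasicConfig is named in Table 1, but GetNormalConfig and GetMDLConfig are assumed helper names, and A_TYPE/B_TYPE/C_TYPE/BIAS_TYPE stand in for the usual MatmulType descriptors; verify all of them against your toolkit headers.

```cpp
// Illustrative sketch, not verbatim API usage: bind a template
// configuration to a Matmul object at compile time.
constexpr MatmulConfig CFG_NORM = GetNormalConfig(); // assumed helper: Norm template
constexpr MatmulConfig CFG_MDL  = GetMDLConfig();    // assumed helper: MDL template

// A_TYPE/B_TYPE/C_TYPE/BIAS_TYPE: placeholder MatmulType descriptors.
AscendC::Matmul<A_TYPE, B_TYPE, C_TYPE, BIAS_TYPE, CFG_MDL> mm;
```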

Table 2 MatmulConfig parameters

Each entry below gives the parameter name, its description, and the supported templates (Norm, MDL, SpecialMDL, IBShare, or BasicBlock).

doNorm

Whether to enable the Norm template. Values:

  • true: enables the Norm template.
  • false: disables the Norm template.

If no value is specified, the Norm template is enabled by default.

Norm

doBasicBlock

Whether to enable the BasicBlock template. Values:

  • true: enables the BasicBlock template.
  • false: disables the BasicBlock template.

When GetBasicConfig is called to obtain the BasicBlock template, this parameter is set to true. Note:

  • Currently, the BasicBlock template supports only matrices A and B whose input is of the half, bfloat16_t, or float type rather than the int8 or int4 type.
  • Currently, the BasicBlock template does not support matrix A in scalar or vector format.
  • Currently, the BasicBlock template does not support the ScheduleType::OUTER_PRODUCT data movement mode.

BasicBlock
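As a hedged sketch of the BasicBlock setup (the three-argument form follows the basicM/basicN/basicK entries later in this table; check the exact GetBasicConfig signature in your headers):

```cpp
// Sketch: fixed 128 x 256 x 64 base blocks with no tail blocks,
// per the BasicBlock preconditions in Table 1. Signature assumed.
constexpr MatmulConfig CFG_BASIC = GetBasicConfig(128, 256, 64);
```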

doMultiDataLoad

Whether to enable the MDL template. Values:

  • true: enables the MDL template.
  • false: disables the MDL template.

MDL

doSpecialMDL

Whether to enable the SpecialMDL template. Values:

  • true: enables the SpecialMDL template.
  • false: disables the SpecialMDL template.

SpecialMDL is essentially an MDL template. When data is not fully loaded in the Matmul K direction (singleCoreK/baseK > stepKb), the MDL template restricts stepN to 1; with doSpecialMDL set to true, stepN can be set to 2.

SpecialMDL

doIBShareNorm

Whether to enable the IBShare template. Values:

  • false (default): disables the IBShare template.
  • true: enables the IBShare template.

IBShare is used to reuse the same matrix A or B data on L1. After IBShare is enabled, repeated data movement to L1 can be avoided for data reuse.

IBShare

intrinsicsCheck

Whether to enable cyclic data move-in when the inner axis (last axis) of the left or right matrix on a single core is greater than or equal to 65535. For example, for the left matrix A [M, K], if the inner-axis size singleCoreK on a single core is greater than or equal to 65535 and this parameter is set to true, the API moves the data in cyclically. Values:

  • false (default): When the inner axis of the left or right matrix on a single core is greater than or equal to 65535, data is not moved in cyclically.
  • true: When the inner axis of the left or right matrix on a single core is greater than or equal to 65535, data is moved in cyclically.

All templates

isNBatch

Whether to enable multi-batch input and output. This parameter is valid only for BatchMatmul. Values:

  • false (default): disables multi-batch input and output.
  • true: enables the multi-batch function.

All templates

enVecND2NZ

Whether to enable ND2NZ (converting data from ND format to NZ format) using the vector unit. To enable this function, you must also call SetLocalWorkspace. Values:

  • false (default): disables ND2NZ using the vector.
  • true: enables ND2NZ using the vector.

All templates

enableInit

Whether to enable the Init function. Disabling the Init function improves constant propagation and optimizes performance. It is enabled by default.

All templates

batchMode

Relationship between the total amount of multi-batch data for input matrices A and B in a BatchMatmul operation and the size of L1 Buffer when the layout mode is set to NORMAL. Values:

  • BatchMode::BATCH_LESS_THAN_L1: Total amount of multi-batch data < Size of L1 Buffer
  • BatchMode::BATCH_LARGE_THAN_L1: Total amount of multi-batch data > Size of L1 Buffer
  • BatchMode::SINGLE_LARGE_THAN_L1: Total amount of single-batch data > Size of L1 Buffer

Norm, IBShare

enUnitFlag

Whether to enable the unitflag function to allow parallel execution of computation and data movement for performance improvement. By default, the function is enabled when the Norm and IBShare templates are used and disabled when the MDL template is used. Values:

  • false: disables the unitflag function.
  • true: enables the unitflag function.

MDL, Norm, IBShare

isPerTensor

Whether quantization for matrix B is conducted per tensor (true) or per channel (false) in the scenario where matrix A's input type is half and matrix B's input type is int8.

MDL, SpecialMDL

hasAntiQuantOffset

Whether to use the offset coefficient when matrix B quantization is enabled in the scenario where matrix A's input type is half and matrix B's input type is int8.

MDL, SpecialMDL

doMTE2Preload

Whether to enable the preloading function in the M/N direction when the MTE2 pipeline gap and the M/N value are large. After this function is enabled, the MTE2 pipeline gap is reduced and performance improves. The preloading function is valid only for the MDL template. Values:

  • 0 (default): disables the function.
  • 1: enables preloading in the M direction.
  • 2: enables preloading in the N direction.

Note: When preloading in the M/N direction is enabled, ensure that the data is fully loaded in the K direction and double buffering is enabled in the M/N direction.

MDL, SpecialMDL

enableReuse

Whether dataPtr in the callback function set by SetSelfDefineData passes computation data directly. Values:

  • true: passes computation data. Only a single value is supported.
  • false: passes data address information stored on GM.

Norm, MDL

enableUBReuse

Whether to enable Unified Buffer reuse. Values:

  • true: enables Unified Buffer reuse.
  • false: disables Unified Buffer reuse.

MDL

enableL1CacheUB

Whether to cache Unified Buffer computing blocks in L1 Buffer. Values:

  • true: caches Unified Buffer computing blocks in L1 Buffer.
  • false: does not cache Unified Buffer computing blocks in L1 Buffer.

To cache Unified Buffer computing blocks in L1 Buffer, you must call SetMatmulConfigParams in the tiling implementation to configure related information.

MDL

enableDoubleCache

Whether to cache two blocks in L1 Buffer after the IBShare template is enabled. Note that the size of the base block must be controlled to prevent the size of the two blocks from exceeding the L1 Buffer size limit. The values are as follows:

  • false (default): caches one block in L1 Buffer.
  • true: caches two blocks in L1 Buffer.

IBShare

IterateOrder

Iteration order in which Matmul performs the matrix computation. This parameter has the same meaning as the iterateOrder field in the TCubeTiling structure. It is valid only when ScheduleType is set to ScheduleType::OUTER_PRODUCT or 1. Values:

enum class IterateOrder {
    ORDER_M = 0,   // Offset to the M-axis direction and then to the N-axis direction.
    ORDER_N,       // Offset to the N-axis direction and then to the M-axis direction.
    UNDEF,         // Invalid currently.
};

Note: When the Norm template (Matmul scenario) and the MDL template are used, if IterateOrder is set to ORDER_M, the value of stepN in the TCubeTiling structure must be greater than 1; if IterateOrder is set to ORDER_N, the value of stepM in the TCubeTiling structure must be greater than 1.

Norm, MDL

ScheduleType

Matmul data movement mode. Values:

  • ScheduleType::INNER_PRODUCT or 0 (default): performs MTE1 cyclic movement in the K direction.
  • ScheduleType::OUTER_PRODUCT or 1: performs MTE1 cyclic movement in the M or N direction. This parameter must be used together with the IterateOrder parameter. Its configuration takes effect only in the Norm template (BatchMatmul and Matmul scenarios) and the MDL template.
    • If the value of IterateOrder is set to ORDER_M, cyclic movement is performed in the N direction, that is, data in matrix B is moved in parallel using MTE1. (The performance may be improved when the value of singleCoreN is greater than that of baseN.)
    • If the value of IterateOrder is set to ORDER_N, cyclic movement is performed in the M direction, that is, data in matrix A is moved in parallel using MTE1. (The performance may be improved when the value of singleCoreM is greater than that of baseM.)
    • The cyclic movement in the M direction and N direction cannot be enabled at the same time.

Note:

  • In the Norm template (BatchMatmul scenario) or the MDL template, when singleCoreK is greater than baseK, ScheduleType::OUTER_PRODUCT cannot be enabled and the default mode must be used.
  • This parameter can be set to ScheduleType::OUTER_PRODUCT or 1 only when the MDL template calls IterateAll for computation.
  • This parameter can be set to ScheduleType::OUTER_PRODUCT or 1 only when matrix C is output to GM.

Norm, MDL

enableStaticPadZeros

Whether to automatically pad zeros based on the sizes of singleM, singleN, singleK, baseM, baseN, and baseK when the static tiling parameters are used and the left and right matrices are moved to L1 Buffer. For details about the static tiling parameters, see GetMatmulApiTiling.

Only the ND2NZ format of the GM input supports padding zeros. In other scenarios, you need to pad zeros manually. Values:

  • false (default): does not pad zeros automatically during data movement. You need to pad zeros manually.
  • true: automatically pads zeros based on the sizes of constant singleM/singleN/singleK and baseM/baseN/baseK during data movement.

Norm

isBiasBatch

Whether the bias size involves batch axes in the BatchMatmul scenario. Values:

  • true (default): The bias size involves batch axes. The bias size is equal to the product of batch size and N.
  • false: The bias size does not involve batch axes. The bias size is N. Bias is reused in the BatchMatmul computation.

Norm

basicM

Equivalent to baseM. Length of the M axis of a base block during Matmul computation, in elements.

BasicBlock

basicN

Equivalent to baseN. Length of the N axis of a base block during Matmul computation, in elements.

BasicBlock

basicK

Equivalent to baseK. Length of the K axis of a base block during Matmul computation, in elements.

BasicBlock

enableSetBias

Whether to compute bias. This parameter can be used to optimize performance. Values:

  • true: enables bias computation (default value). If the input contains bias, data with bias is moved and computed during implementation.
  • false: disables bias computation. The code related to bias processing is deleted during compilation to optimize performance.

MDL

enableEnd

Whether to call the End function during Matmul computation. This parameter can be used to optimize performance. Values:

  • true (default): The End function needs to be called during Matmul computation.
  • false: The End function does not need to be called. The code related to End processing is deleted during compilation to optimize performance. For example, if the End function does not need to be called in the asynchronous scenario, set this parameter to false.

All templates

enableGetTensorC

Whether to call the GetTensorC function during Matmul computation. This parameter can be used to optimize performance. Values:

  • true (default): The GetTensorC function needs to be called during Matmul computation.
  • false: The GetTensorC function does not need to be called. The code related to GetTensorC processing is deleted during compilation to optimize performance.

All templates

enableSetOrgShape

Whether to call the SetOrgShape function during Matmul computation. This parameter can be used to optimize performance. Values:

  • true (default): The SetOrgShape function needs to be called during Matmul computation.
  • false: The SetOrgShape function does not need to be called. The code related to SetOrgShape processing is deleted during compilation to optimize performance.

All templates

enableSetTail

Whether to call the SetTail function during Matmul computation. This parameter can be used to optimize performance. Values:

  • true (default): The SetTail function needs to be called during Matmul computation.
  • false: The SetTail function does not need to be called. The code related to SetTail processing is deleted during compilation to optimize performance.

All templates

enableQuantVector

Whether to call the SetQuantVector and SetQuantScalar functions during Matmul computation. This parameter can be used to optimize performance. Values:

  • true (default): The SetQuantVector and SetQuantScalar functions need to be called during Matmul computation.
  • false: The SetQuantVector and SetQuantScalar functions do not need to be called. The code related to SetQuantVector and SetQuantScalar processing is deleted during compilation to optimize performance.

All templates

enableSetDefineData

Whether to allow setting the computation data required by the callback function, or the address of that data on GM, when MatmulCallBack (a custom callback function) is enabled. Values:

  • true (default): The setting is allowed.
  • false: The setting is not allowed. The code related to SetSelfDefineData processing is deleted during compilation to optimize performance.

MDL

iterateMode

Iteration mode, used to reduce Matmul computation overhead on the separated architecture. It constrains the Iterate APIs (Iterate, IterateAll, IterateBatch, and IterateNBatch): when a mode is enabled, only the Iterate API corresponding to that mode is called during Matmul computation, and the code for the other Iterate APIs is deleted during compilation to optimize performance. This parameter is of the IterateMode type. Values:

  • ITERATE_MODE_NORMAL: For Iterate APIs, only Iterate is called during Matmul computation.
  • ITERATE_MODE_ALL: For Iterate APIs, only IterateAll is called during Matmul computation.
  • ITERATE_MODE_BATCH: For Iterate APIs, only IterateBatch is called during Matmul computation.
  • ITERATE_MODE_N_BATCH: For Iterate APIs, only IterateNBatch is called during Matmul computation.
  • ITERATE_MODE_DEFAULT (default): The number of Iterate APIs to be called is not limited, and the optimization of the computation overhead is disabled.

All templates
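As a hedged sketch of restricting the Iterate entry point (GetMDLConfig is an assumed helper, and the exact field type of iterateMode may differ; only the field and enumerator names come from this table):

```cpp
// Sketch: start from MDL defaults and keep only IterateAll, letting the
// compiler drop code for the other Iterate variants.
constexpr MatmulConfig GetMDLAllOnlyConfig()
{
    MatmulConfig cfg = GetMDLConfig();               // assumed helper
    cfg.iterateMode = IterateMode::ITERATE_MODE_ALL; // field name from this table
    return cfg;
}
constexpr MatmulConfig MM_CFG = GetMDLAllOnlyConfig();
```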

intraBlockPartSum

Whether to enable the accumulation of a single compute result (matrix slices with the size of baseM x baseN) of two AIV cores on L0C Buffer in the case of fused vector and cube computation on the separated architecture. Values:

  • false (default): The compute results of two AIV cores are not accumulated on L0C Buffer.
  • true: The compute results of two AIV cores are accumulated on L0C Buffer.

Norm

isPartialOutput

Whether to enable the PartialOutput function, which controls the base-block computation mode of Matmul's sequential output in the K direction; in other words, it determines whether the K axis is reduced within one Iterate computation of Matmul. Values:

  • true: enables the PartialOutput function. The K axis of an Iterate computation is not reduced; each Matmul computation outputs matrix slices of size baseM x baseN for a partial baseK.
  • false: disables the PartialOutput function. The K axis of an Iterate computation is reduced; each Matmul computation outputs matrix slices of size baseM x baseN reduced over the full singleCoreK length.

MDL

doSpecialBasicBlock

Whether to enable the SpecialBasicBlock template. Values:

  • true: enables the SpecialBasicBlock template.
  • false: disables the SpecialBasicBlock template.

It is essentially a BasicBlock template, but it eliminates scalar computation overhead.

Reserved

singleCoreM

Shape size of a single core on the M axis, in elements.

Reserved

singleCoreN

Shape size of a single core on the N axis, in elements.

Reserved

singleCoreK

Shape size of a single core on the K axis, in elements.

Reserved

stepM

Multiple of baseM buffered in A1 for the left matrix in the M direction.

Reserved

stepN

Multiple of baseN buffered in B1 for the right matrix in the N direction.

Reserved

baseMN

Size of baseM × baseN.

Reserved

singleCoreMN

Size of singleCoreM × singleCoreN.

Reserved