BlockReduceSum
Function Usage
Sums all elements in each data block. Source operands are added in binary tree mode. For details about reduction instructions, see Reduction Instructions.
To sum 128 half-type elements, the instruction processes 8 data blocks, each containing 16 half-type elements. Within each data block, the elements are added in binary-tree order. The following figure shows the schematic diagram of BlockReduceSum.
When a computation result is greater than 65504 (the maximum finite half value), it is saturated to 65504. For example, for the source operand [60000, 60000, –30000, 100]: 60000 + 60000 > 65504, so the result overflows and the maximum value 65504 is used instead. Meanwhile, –30000 + 100 = –29900, and the final result is 65504 – 29900 = 35604. The following figure shows the computation.
Prototype
- Bitwise mask mode
```cpp
template <typename T, bool isSetMask = true>
__aicore__ inline void BlockReduceSum(const LocalTensor<T>& dstLocal,
                                      const LocalTensor<T>& srcLocal,
                                      const int32_t repeat,
                                      const uint64_t mask[],
                                      const int32_t dstRepStride,
                                      const int32_t srcBlkStride,
                                      const int32_t srcRepStride)
```
- Contiguous mask mode
```cpp
template <typename T, bool isSetMask = true>
__aicore__ inline void BlockReduceSum(const LocalTensor<T>& dstLocal,
                                      const LocalTensor<T>& srcLocal,
                                      const int32_t repeat,
                                      const int32_t maskCount,
                                      const int32_t dstRepStride,
                                      const int32_t srcBlkStride,
                                      const int32_t srcRepStride)
```
Parameters
| Parameter | Description |
|---|---|
| T | Operand data type. |
| isSetMask | Indicates whether to set the mask inside the API. |
| Parameter | Input/Output | Description |
|---|---|---|
| dstLocal | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 16-byte aligned (for half-type data) or 32-byte aligned (for float-type data). |
| srcLocal | Input | Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 32-byte aligned. |
| repeat | Input | Number of iteration repeats. The value range is [0, 255]. |
| mask[2]/maskCount | Input | mask controls which elements participate in the computation in each iteration. |
| dstRepStride | Input | Address stride between adjacent iterations of the destination operand, in units of the reduced length of one repeat. Each repeat (eight data blocks) reduces to eight elements, so the unit of RepStride is 16 bytes when the input type is half and 32 bytes when the input type is float. Note that this parameter cannot be set to 0 for the |
| srcBlkStride | Input | Address stride of data blocks within a single iteration. For details, see dataBlockStride. |
| srcRepStride | Input | Address stride between adjacent iterations of the source operand, that is, the number of data blocks the source operand skips in each iteration. For details, see repeatStride. |
Returns
None
Availability
Precautions
- To save memory space, you can define a tensor shared by the source and destination operands (by address overlapping). Note that computed destination data must not overwrite source elements that have not yet participated in the computation. Exercise caution when defining such a tensor.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
Example
- This example shows only part of the code used in the computation process. To run the sample code, copy the code snippet and replace part of the code of the Compute function in Template Sample.
- BlockReduceSum – Example of high-dimensional tensor sharding computation (contiguous mask mode)
```cpp
uint64_t mask = 256 / sizeof(half);  // 128 elements
int repeat = 1;
// repeat = 1: one repeat of 128 elements, 128 elements in total
// srcBlkStride = 1: no gap between blocks within one repeat
// dstRepStride = 1, srcRepStride = 8: no gap between repeats
AscendC::BlockReduceSum<half>(dstLocal, srcLocal, repeat, mask, 1, 1, 8);
```
- BlockReduceSum – Example of high-dimensional tensor sharding computation (bitwise mask mode)
```cpp
uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };  // all 128 elements
int repeat = 1;
// repeat = 1: one repeat of 128 elements, 128 elements in total
// srcBlkStride = 1: no gap between blocks within one repeat
// dstRepStride = 1, srcRepStride = 8: no gap between repeats
AscendC::BlockReduceSum<half>(dstLocal, srcLocal, repeat, mask, 1, 1, 8);
```
Result example:

```
Input (src_gm):
[-7.289, 4.48, -5.898, -6.199, 1.422, -6.168, -3.178, -1.198,
 7.789, 6.754, -5.191, -0.6797, 2.883, 2.08, 8.664, -8.539,
 ...,
 -7.625, 2.529, 7.855, -2.012, -6.52, -6.652, -8.422, -9.914,
 -4.355, 1.849, 5.406, 1.483, -6.074, -1.897, 8.625, 1.969]
Output (dst_gm):
[-10.27, ..., -23.77, 0, ..., 0]
```