PairReduceSum

Function Usage

PairReduceSum: sums each pair of adjacent (even- and odd-indexed) elements. For example, given (a1, a2, a3, a4, a5, a6, ...), the result is (a1+a2, a3+a4, a5+a6, ...). For details about reduction instructions, see Reduction Instructions.

Prototype

  • Bitwise mask mode
    template <typename T, bool isSetMask = true>
    __aicore__ inline void PairReduceSum(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const int32_t repeat, const uint64_t mask[], const int32_t dstRepStride, const int32_t srcBlkStride, const int32_t srcRepStride)
    
  • Contiguous mask mode
    template <typename T, bool isSetMask = true>
    __aicore__ inline void PairReduceSum(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const int32_t repeat, const int32_t maskCount, const int32_t dstRepStride, const int32_t srcBlkStride, const int32_t srcRepStride)
    

Parameters

Table 1 Parameters in the template

Parameter

Description

T

Operand data type.

For the Atlas Training Series Product, the supported data type is half.

isSetMask

Indicates whether to set mask inside the API.

  • true: sets mask inside the API.
  • false: sets mask outside the API. Developers need to use the SetVectorMask API to set the mask value. In this mode, the mask value in the input parameter of this API must be set to MASK_PLACEHOLDER.
Table 2 Parameters

Parameter

Input/Output

Description

dstLocal

Output

Destination operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

srcLocal

Input

Source operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

repeat

Input

Number of iteration repeats. The value range is [0, 255].

For details about this parameter, see Common Parameters.

mask[2]/maskCount

Input

mask is used to control the elements that participate in computation in each iteration.

  • Contiguous mode: indicates the number of contiguous elements that participate in computation. The value range depends on the operand data type, which determines the maximum number of elements processed per iteration: for 16-bit operands, mask ∈ [1, 128]; for 32-bit operands, mask ∈ [1, 64]; for 64-bit operands, mask ∈ [1, 32].
  • Bitwise mode: controls the elements that participate in computation by bit. If a bit is set to 1, the corresponding element participates in the computation. If a bit is set to 0, the corresponding element is masked in the computation. The parameter type is a uint64_t array whose length is 2.

    For example, if mask = [8, 0] (8 = 0b1000), only the fourth element participates in computation.

    The parameter value range is related to the operand data type, which determines the maximum number of elements processed per iteration. When the operand is 16-bit, mask[0] and mask[1] ∈ [0, 2^64 – 1] and cannot both be 0. When the operand is 32-bit, mask[1] is 0 and mask[0] ∈ (0, 2^64 – 1]. When the operand is 64-bit, mask[1] is 0 and mask[0] ∈ (0, 2^32 – 1].

dstRepStride

Input

Address stride between adjacent iterations of the destination operand. The unit is the data length of one repeat after reduction: because PairReduceSum halves the data within each repeat, the unit is 128 bytes (half of the 256-byte source repeat).

Note that this parameter cannot be set to 0 for the Atlas Training Series Product.

srcBlkStride

Input

Address stride of data blocks in a single iteration. For details, see dataBlockStride.

srcRepStride

Input

Address stride between adjacent iterations of the source operand, that is, the number of data blocks the source operand advances in each iteration. For details, see repeatStride.

Returns

None

Availability

Atlas Training Series Product

Precautions

  • For details about the alignment requirements of the operand address offset, see General Restrictions.

Example

  • PairReduceSum – Example of high-dimensional tensor sharding computation (contiguous mask mode)
    uint64_t mask = 256/sizeof(half);
    int repeat = 1;
    // repeat = 1, 128 elements one repeat, 128 elements total
    // srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride = 1, srcRepStride = 8, no gap between repeats
    AscendC::PairReduceSum<half>(dstLocal, srcLocal, repeat, mask, 1, 1, 8);
    
  • PairReduceSum – Example of high-dimensional tensor sharding computation (bitwise mask mode)
    uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
    int repeat = 1;
    // repeat = 1, 128 elements one repeat, 128 elements total
    // srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride = 1, srcRepStride = 8, no gap between repeats
    AscendC::PairReduceSum<half>(dstLocal, srcLocal, repeat, mask, 1, 1, 8);
    
  • Result example:
    Input (src_gm):
    [-3.441, 7.246, -0.02759, -6.324, 3.693, -7.984, -4.246, 6.332, -3.734, -2.699, -6.91, 7.887, -3.631, 5.219, 6.539, 8.688, 6.523, -6.789, -8.547, 4.258, 1.344, -8.469, -0.9253, -3.914, 3.293, -9.828, 7.082, 5.961, 2.133, 1.959, 3.928, -1.062, 9.18, -1.725, -3.645, 1.457, -2.328, -0.9487, -0.2849, -2.998, -9.281, 3.137, 0.4028, 5.961, -6.25, 2.406, -6.203, -2.699, 4.914, 1.653, -6.383, 6.855, 9.164, 0.6646, -2.854, 3.18, -0.5884, 0.4258, -5.773, -2.152, 4.258, 4.129, -8.719, -8.828, 6.145, 7.387, 1.386, -4.684, 6.324, -1.275, -1.816, 3.357, 6.832, -1.059, -9.852, -8.539, 2.938, -2.002, 9.625, -4.387, -1.309, 8.289, 2.906, -1.035, 7.723, 4.727, -6.477, 2.389, 6.75, -6.688, -0.04248, -6.613, -3.424, 7.145, 4.836, -5.617, -5.855, -5.234, -9.422, -9.852, -8.531, 2.115, 5.109, -8.094, -6.238, 9.898, -6.848, -6.051, 7.109, 4.227, -0.6187, -3.492, -4.352, 1.344, 1.526, 2.572, 2.16, -1.135, 9.812, 1.426, -8, 3.291, -2.039, 5.93, -5.52, -5.156, -9.422, 0.2236]  
    Output (dst_gm):
    [3.805, -6.352, -4.289, 2.086, -6.434, 0.9766, 1.588, 15.23, -0.2656, -4.289, -7.125, -4.84, -6.535, 13.05, 4.094, 2.865, 7.453, -2.188, -3.277, -3.283, -6.145, 6.363, -3.844, -8.906, 6.566, 0.4727, 9.828, 0.3262, -0.1626, -7.926, 8.391, -17.55, 13.53, -3.297, 5.047, 1.541, 5.773, -18.39, 0.9355, 5.238, 6.98, 1.871, 12.45, -4.086, 0.0625, -6.656, 3.721, -0.7812, -11.09, -19.28, -6.414, -2.984, 3.66, -12.9, 11.34, -4.109, -3.008, 4.098, 1.025, 11.23, -4.711, 3.891, -10.67, -9.195]
  • Complete code example:
    #include "kernel_operator.h"
    class KernelReduce {
    public:
        __aicore__ inline KernelReduce() {}
        __aicore__ inline void Init(__gm__ uint8_t* src, __gm__ uint8_t* dstGm)
        {
            srcGlobal.SetGlobalBuffer((__gm__ half*)src);
            dstGlobal.SetGlobalBuffer((__gm__ half*)dstGm);
            pipe.InitBuffer(inQueueSrc, 1, srcDataSize * sizeof(half));
            pipe.InitBuffer(outQueueDst, 1, dstDataSize * sizeof(half));
        }
        __aicore__ inline void Process()
        {
            CopyIn();
            Compute();
            CopyOut();
        }
    private:
        __aicore__ inline void CopyIn()
        {
            AscendC::LocalTensor<half> srcLocal = inQueueSrc.AllocTensor<half>();
            AscendC::DataCopy(srcLocal, srcGlobal, srcDataSize);
            inQueueSrc.EnQue(srcLocal);
        }
        __aicore__ inline void Compute()
        {
            AscendC::LocalTensor<half> srcLocal = inQueueSrc.DeQue<half>();
            AscendC::LocalTensor<half> dstLocal = outQueueDst.AllocTensor<half>();
            half zero(0);
            AscendC::Duplicate(dstLocal, zero, dstDataSize);
        // Invoke PairReduceSum here (replace this line with one of the example calls above)
            outQueueDst.EnQue<half>(dstLocal);
            inQueueSrc.FreeTensor(srcLocal);
        }
        __aicore__ inline void CopyOut()
        {
            AscendC::LocalTensor<half> dstLocal = outQueueDst.DeQue<half>();
            AscendC::DataCopy(dstGlobal, dstLocal, dstDataSize);
            outQueueDst.FreeTensor(dstLocal);
        }
    private:
        AscendC::TPipe pipe;
        AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueSrc;
        AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueueDst;
        AscendC::GlobalTensor<half> srcGlobal, dstGlobal;
        int srcDataSize = 128;
        int dstDataSize = 64;
    };
    extern "C" __global__ __aicore__ void reduce_simple_kernel(__gm__ uint8_t* src, __gm__ uint8_t* dstGm)
    {
        KernelReduce op;
        op.Init(src, dstGm);
        op.Process();
    }