CastDeq

Function Usage

When isVecDeq is set to false, quantizes the int16_t input and converts its precision based on the scale, offset, and signMode values set by SetDeqScale, producing int8_t or uint8_t output. Set signMode to true to return signed numbers, or to false to return unsigned numbers. The formula is as follows:
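The formula itself does not appear in this extract. A hedged reconstruction from the parameters above (the exact rounding behavior is not specified here) is:

```latex
\mathrm{dst}_i = \operatorname{sat}_{8\text{-bit}}\!\left(\mathrm{src}_i \times \mathrm{scale} + \mathrm{offset}\right)
```

where the saturation range is [-128, 127] for int8_t (signMode = true) and [0, 255] for uint8_t (signMode = false).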

When isVecDeq is set to true, quantizes the int16_t input in loop mode and converts its precision based on the 16 groups of quantization parameters (scale0–scale15, offset0–offset15, and signMode0–signMode15) set in a 128-byte UB segment by SetDeqScale, producing int8_t or uint8_t output. Set signMode to true to return signed numbers, or to false to return unsigned numbers. The formula is as follows:
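The loop-mode formula is also missing from this extract. A hedged reconstruction, written as if the 16 parameter groups cycle with the element index (the exact cycling granularity is determined by the hardware), is:

```latex
\mathrm{dst}_i = \operatorname{sat}_{8\text{-bit}}\!\left(\mathrm{src}_i \times \mathrm{scale}_{\,i \bmod 16} + \mathrm{offset}_{\,i \bmod 16}\right)
```

with the saturation range per group chosen by the corresponding signMode entry.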

Prototype

  • Computation of the first n data elements of a tensor
    template <typename T1, typename T2, bool isVecDeq = true, bool halfBlock = true>
    __aicore__ inline void CastDeq(const LocalTensor<T1>& dstLocal, const LocalTensor<T2>& srcLocal, const uint32_t calCount)
    
  • High-dimensional tensor sharding computation
    • Bitwise mask mode
      template <typename T1, typename T2, bool isSetMask = true, bool isVecDeq = true, bool halfBlock = true>
      __aicore__ inline void CastDeq(const LocalTensor<T1>& dstLocal, const LocalTensor<T2>& srcLocal, const uint64_t mask[], uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      
    • Contiguous mask mode
      template <typename T1, typename T2, bool isSetMask = true, bool isVecDeq = true, bool halfBlock = true>
      __aicore__ inline void CastDeq(const LocalTensor<T1>& dstLocal, const LocalTensor<T2>& srcLocal, const int32_t mask, uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      

Parameters

Figure 1 Description of halfBlock
Table 1 Parameters in the template

Parameter

Description

halfBlock

Specifies whether output elements are stored in the upper or lower half of each data block. If true, the result is stored in the lower half block; if false, it is stored in the upper half block, as shown in Figure 1.

T1

Data type of the output tensor. The value can be int8_t or uint8_t.

This parameter is used together with the input parameter signMode of the SetDeqScale API. When signMode is set to true, the output data type is int8_t. When signMode is set to false, the output data type is uint8_t.

T2

Data type of the input tensor. The value can be int16_t.

isVecDeq

This parameter is used together with SetDeqScale(const LocalTensor<T>& src). When a tensor is passed through SetDeqScale, isVecDeq must be true.

isSetMask

Indicates whether to set mask inside the API.

  • true: sets mask inside the API.
  • false: sets mask outside the API. Developers need to use the SetVectorMask API to set the mask value. In this mode, the mask value in the input parameter of this API must be set to MASK_PLACEHOLDER.
Table 2 API parameters

Parameter

Input/Output

Description

dstLocal

Output

Destination operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

srcLocal

Input

Source operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

mask

Input

mask is used to control the elements that participate in computation in each iteration.

  • Contiguous mode: indicates the number of contiguous elements that participate in computation. The value range is related to the operand data type. The maximum number of elements that can be processed in each iteration varies according to the data type. When the operand is 16-bit, mask ∈ [1, 128]. When the operand is 32-bit, mask ∈ [1, 64]. When the operand is 64-bit, mask ∈ [1, 32].
  • Bitwise mode: controls the elements that participate in computation by bit. If a bit is set to 1, the corresponding element participates in the computation. If a bit is set to 0, the corresponding element is masked in the computation. The parameter type is a uint64_t array whose length is 2.

    For example, if mask = [0, 8] and 8 = 0b1000, only the fourth element participates in computation.

    The parameter value range is related to the operand data type. The maximum number of elements that can be processed in each iteration varies according to the data type. When the operand is 16-bit, mask[0] and mask[1] ∈ [0, 2^64 - 1] and cannot both be 0. When the operand is 32-bit, mask[1] is 0 and mask[0] ∈ (0, 2^64 - 1]. When the operand is 64-bit, mask[1] is 0 and mask[0] ∈ (0, 2^32 - 1].

When the number of bits of the source operand is different from that of the destination operand, the data type with more bytes is used for the computation. For example, if the source operand is of the half type and the destination operand is of the int32_t type, int32_t is used to compute mask.

repeatTimes

Input

Number of repeated iterations. The Vector unit reads 256 bytes of contiguous data per iteration; to process the full input, it reads the data over multiple iterations, and repeatTimes specifies how many.

For details about this parameter, see Common Parameters.

repeatParams

Input

Parameters that control the operand address strides, of the UnaryRepeatParams type. They include the address stride of each operand for the same data block between adjacent iterations (repeat stride) and the address stride between different data blocks within a single iteration (data block stride).

For details about the address stride of the operand between adjacent iterations, see repeatStride. For details about the address stride of the operand between different data blocks in a single iteration, see dataBlockStride.

calCount

Input

Number of elements of the input data.

Returns

None

Availability

Precautions

  • For details about the alignment requirements of the operand address offset, see General Restrictions.
  • repeatTimes ∈ [0, 255]
  • The parallelism degree in each repeat depends on the data precision and AI processor model. For example, during s16 to s8/u8 conversion, 128 source or destination elements are processed in each repeat.
  • To save memory space, you can define a tensor shared by the source and destination operands (by address overlapping). The general instruction restrictions are as follows.
    • For a single repeat (repeatTimes = 1), the source operand must completely overlap the destination operand.
    • For multiple repeats (repeatTimes > 1), if there is a dependency between the source operand and the destination operand, that is, the destination operand of the Nth iteration is the source operand of the (N + 1)th iteration, address overlapping is not allowed.

Example

To run the sample code, copy the code snippet and replace some code of the Compute function in Template Sample.

  • Example of high-dimensional sharding computation API - contiguous mask mode
    uint64_t mask = 256 / sizeof(int16_t);
    // repeatTimes = 2, 128 elements one repeat, 256 elements total
    // dstBlkStride, srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride, srcRepStride = 8, no gap between repeats
    AscendC::CastDeq<uint8_t, int16_t, true, true, true>(dstLocal, srcLocal, mask, 2, { 1, 1, 8, 8 });
    
  • Example of high-dimensional sharding computation API - bitwise mask mode
    uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
    // repeatTimes = 2, 128 elements one repeat, 256 elements total
    // dstBlkStride, srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride, srcRepStride = 8, no gap between repeats
    AscendC::CastDeq<uint8_t, int16_t, true, true, true>(dstLocal, srcLocal, mask, 2, { 1, 1, 8, 8 });
    
  • Example of the API for computing the first n data elements
    AscendC::CastDeq<uint8_t, int16_t, true, true>(dstLocal, srcLocal, 256);
    

Result example:

Input (srcLocal):
[20 53 26 12 36  6 20 93 66 30 56 99 59 92  7 37 22 47 98 10 85 29 14 46
 17 34 45 17 25 45 82 17 66 94 68 23 67  8 89  8 92  6 10 80 87 20  9 81
 70 62 11 58 38 83 32 14 38 47 41 63 94 26 96 89 88 35 86 55 60 82 15 65
 92 67 83 23 63 25 85 93 50 91 75 60 80 10 55 20 71 14 67 23 31 63  7 93
 69 45 61 23 43 86 11 81 81 36 76 58 53 25 23 51 59 78 82 10 39 40 24 50
 68 49 79 40  4 53 22 38 45 17 29 54  9 66 98 47 12 47 47 20 98  0 59 77
  1 21 39 70 66 20 68  8 77 77 54  0  3 33 37 37 48 60 83 88 27 70 31 49
 75 21 59  3 99 84 92 84 14 44 26 56 72 56 37 52 39 11  2 59 59 65 71 64
 10 65 62 48 42 79 69 69 27 99  8 38 36 77 34 34 60 50 52 50 41 31 95 68
 27 16 42 64 19 47  0 10 36 36 33 62 98 64 32 81 49 53 27 70 35  9 63  7
 10 89  3 39 94 23 89 16 23 60 71 42 46 58 65 90]
Output (dstLocal):
[ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 20 53 26 12 36  6 20 93
 66 30 56 99 59 92  7 37  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 22 47 98 10 85 29 14 46 17 34 45 17 25 45 82 17  0  0  0  0  0  0  0  0
  0  0  0  0  0  0  0  0 66 94 68 23 67  8 89  8 92  6 10 80 87 20  9 81
  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 70 62 11 58 38 83 32 14
 38 47 41 63 94 26 96 89  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 88 35 86 55 60 82 15 65 92 67 83 23 63 25 85 93  0  0  0  0  0  0  0  0
  0  0  0  0  0  0  0  0 50 91 75 60 80 10 55 20 71 14 67 23 31 63  7 93
  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 69 45 61 23 43 86 11 81
 81 36 76 58 53 25 23 51  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 59 78 82 10 39 40 24 50 68 49 79 40  4 53 22 38  0  0  0  0  0  0  0  0
  0  0  0  0  0  0  0  0 45 17 29 54  9 66 98 47 12 47 47 20 98  0 59 77
  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1 21 39 70 66 20 68  8
 77 77 54  0  3 33 37 37  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 48 60 83 88 27 70 31 49 75 21 59  3 99 84 92 84  0  0  0  0  0  0  0  0
  0  0  0  0  0  0  0  0 14 44 26 56 72 56 37 52 39 11  2 59 59 65 71 64
  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 10 65 62 48 42 79 69 69
 27 99  8 38 36 77 34 34  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 60 50 52 50 41 31 95 68 27 16 42 64 19 47  0 10  0  0  0  0  0  0  0  0
  0  0  0  0  0  0  0  0 36 36 33 62 98 64 32 81 49 53 27 70 35  9 63  7
  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 10 89  3 39 94 23 89 16
 23 60 71 42 46 58 65 90]

Template Sample

This section provides a template sample to help you quickly run reference instruction samples.

You can use the following template sample as the code framework; to run a reference sample, copy its snippet into the Compute function in place of the existing computation code.
#include "kernel_operator.h"
template <typename srcType, typename dstType>
class KernelCastDeq {
public:
    __aicore__ inline KernelCastDeq() {}
    __aicore__ inline void Init(GM_ADDR src_gm, GM_ADDR dst_gm, uint32_t inputSize, bool halfBlock, bool isVecDeq)
    {
        srcSize = inputSize;
        dstSize = inputSize * 2;
        this->halfBlock = halfBlock;
        this->isVecDeq = isVecDeq;
        src_global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType*>(src_gm), srcSize);
        dst_global.SetGlobalBuffer(reinterpret_cast<__gm__ dstType*>(dst_gm), dstSize);
        pipe.InitBuffer(inQueueX, 1, srcSize * sizeof(srcType));
        pipe.InitBuffer(outQueue, 1, dstSize * sizeof(dstType));
        pipe.InitBuffer(tmpQueue, 1, 128);
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }
private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<srcType> srcLocal = inQueueX.AllocTensor<srcType>();
        AscendC::DataCopy(srcLocal, src_global, srcSize);
        inQueueX.EnQue(srcLocal);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<dstType> dstLocal = outQueue.AllocTensor<dstType>();
        AscendC::LocalTensor<uint64_t> tmpBuffer = tmpQueue.AllocTensor<uint64_t>();
        AscendC::Duplicate(tmpBuffer.ReinterpretCast<int32_t>(), static_cast<int32_t>(0), 32);
        AscendC::PipeBarrier<PIPE_V>();
        AscendC::Duplicate<int32_t>(dstLocal.template ReinterpretCast<int32_t>(), static_cast<int32_t>(0), dstSize / sizeof(int32_t));
        AscendC::PipeBarrier<PIPE_ALL>();
        bool signMode = false;
        if constexpr (AscendC::IsSameType<dstType, int8_t>::value) {
            signMode = true;
        }
        AscendC::LocalTensor<srcType> srcLocal = inQueueX.DeQue<srcType>();
        if (halfBlock) {
            if (isVecDeq) {
                float vdeqScale[16] = { 1.0 };
                int16_t vdeqOffset[16] = { 0 };
                bool vdeqSignMode[16] = { signMode };
                AscendC::VdeqInfo vdeqInfo(vdeqScale, vdeqOffset, vdeqSignMode);
                AscendC::SetDeqScale(tmpBuffer, vdeqInfo);
                AscendC::CastDeq<dstType, srcType, true, true>(dstLocal, srcLocal, srcSize);
            } else {
                float scale = 1.0;
                int16_t offset = 0;
                AscendC::SetDeqScale(scale, offset, signMode);
                AscendC::CastDeq<dstType, srcType, false, true>(dstLocal, srcLocal, srcSize);
            }
        } else {
            if (isVecDeq) {
                float vdeqScale[16] = { 1.0 };
                int16_t vdeqOffset[16] = { 0 };
                bool vdeqSignMode[16] = { signMode };
                AscendC::VdeqInfo vdeqInfo(vdeqScale, vdeqOffset, vdeqSignMode);
                AscendC::SetDeqScale(tmpBuffer, vdeqInfo);
                AscendC::CastDeq<dstType, srcType, true, false>(dstLocal, srcLocal, srcSize);
            } else {
                float scale = 1.0;
                int16_t offset = 0;
                AscendC::SetDeqScale(scale, offset, signMode);
                AscendC::CastDeq<dstType, srcType, false, false>(dstLocal, srcLocal, srcSize);
            }
        }
        outQueue.EnQue<dstType>(dstLocal);
        tmpQueue.FreeTensor(tmpBuffer);
        inQueueX.FreeTensor(srcLocal);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<dstType> dstLocal = outQueue.DeQue<dstType>();
        AscendC::DataCopy(dst_global, dstLocal, dstSize);
        outQueue.FreeTensor(dstLocal);
    }
private:
    AscendC::GlobalTensor<srcType> src_global;
    AscendC::GlobalTensor<dstType> dst_global;
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueX;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> tmpQueue;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueue;
    bool halfBlock = false;
    bool isVecDeq = false;
    uint32_t srcSize = 0;
    uint32_t dstSize = 0;
};
template <typename srcType, typename dstType>
__aicore__ void kernel_cast_deqscale_operator(GM_ADDR src_gm, GM_ADDR dst_gm, uint32_t dataSize, bool halfBlock, bool isVecDeq)
{
    KernelCastDeq<srcType, dstType> op;
    op.Init(src_gm, dst_gm, dataSize, halfBlock, isVecDeq);
    op.Process();
}
extern "C" __global__ __aicore__ void kernel_cast_deqscale_operator_256_int16_t_uint8_t_true_true(GM_ADDR src_gm, GM_ADDR dst_gm)
{
    kernel_cast_deqscale_operator<int16_t, uint8_t>(src_gm, dst_gm, 256, true, true);
}