GeGLU
Function Usage
GeGLU is a GLU variant that uses GELU as the activation function. It is computed element by element as follows, where PAR represents the number of elements that the vector unit can process in one iteration:

GeGLU(x1_i, x0_i) = GELU(x1_i) * x0_i,  i ∈ [0, PAR)

The formula for calculating the GELU activation function is as follows:

GELU(x) = x * Φ(x) = 0.5 * x * (1 + erf(x / sqrt(2)))

erf in the foregoing formula is the error function:

erf(x) = (2 / sqrt(π)) * ∫[0, x] e^(−t²) dt

The error function has no analytical expression, so the tanh approximate expression, which is widely accepted in the industry, is used instead:

GELU(x) ≈ 0.5 * x * (1 + tanh(sqrt(2/π) * (x + 0.044715 * x³)))

Substituting the GELU approximate formula, and rewriting 0.5 * (1 + tanh(y)) as the sigmoid 1 / (1 + e^(−2y)), yields the GeGLU expression:

GeGLU(x1, x0) ≈ x1 / (1 + e^(a * (x1³ + b * x1))) * x0

where a = -0.0713548162726 (that is, -2 × 0.044715 × sqrt(2/π)) and b = 22.363860002236 (that is, 1 / 0.044715). x1 and x0 are elements of srcTensor1 and srcTensor0, respectively.
Prototype
- Pass the temporary space through the sharedTmpBuffer input parameter.
  - All or part of the source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void GeGLU(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor0,
                                 const LocalTensor<T> &srcTensor1, const LocalTensor<uint8_t> &sharedTmpBuffer,
                                 uint32_t calCount)
    ```

  - All source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void GeGLU(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor0,
                                 const LocalTensor<T> &srcTensor1, const LocalTensor<uint8_t> &sharedTmpBuffer)
    ```

- Allocate the temporary space through the API framework.
  - All or part of the source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void GeGLU(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor0,
                                 const LocalTensor<T> &srcTensor1, uint32_t calCount)
    ```

  - All source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void GeGLU(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor0,
                                 const LocalTensor<T> &srcTensor1)
    ```
Due to the complex mathematical computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space can be passed by developers through the sharedTmpBuffer input parameter or allocated through the API framework.
- When the temporary space is passed through the sharedTmpBuffer input parameter, that tensor serves as the temporary space, and the API framework is not needed for allocation. Developers can then manage the sharedTmpBuffer space themselves and reuse it after the API call, so that the buffer is not repeatedly allocated and deallocated, improving flexibility and buffer utilization.
- When the API framework is used for temporary space allocation, developers do not need to allocate the space, but must reserve the required size for the space.
If sharedTmpBuffer is used, developers must allocate sufficient space for the tensor; if the API framework is used, developers must reserve the required temporary space. To obtain the size of the temporary space (BufferSize) to be reserved, use the API described in GetGeGLUMaxMinTmpSize.
Parameters
| Parameter | Description |
|---|---|
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. This parameter is reserved; pass the default value false. |
| Parameter | Input/Output | Description |
|---|---|---|
| dstTensor | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. |
| srcTensor0/srcTensor1 | Input | Source operands. They must have the same data type as the destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. |
| calCount | Input | Number of input data elements. The value range is [0, min(srcTensor0.GetSize(), srcTensor1.GetSize(), dstTensor.GetSize())]. |
| sharedTmpBuffer | Input | Temporary buffer. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. It stores intermediate variables produced during the complex computation inside GeGLU and is provided by developers. For details about how to obtain the temporary space size (BufferSize), see GetGeGLUMaxMinTmpSize. |
Returns
None
Availability
Constraints
- For details about the alignment requirements of the operand address offset, see General Restrictions.
- The source operand address must not overlap the destination operand address.
- Currently, only the ND format is supported.
Example
```cpp
#include "kernel_operator.h"

template <typename srcType>
class KernelGeGLU {
public:
    __aicore__ inline KernelGeGLU() {}
    __aicore__ inline void Init(GM_ADDR src0Gm, GM_ADDR src1Gm, GM_ADDR dstGm, uint32_t inputSize,
                                uint32_t tmpBufSize)
    {
        dataSize = inputSize;
        uint32_t bufSize = 4 * tmpBufSize;
        src0Global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(src0Gm), dataSize);
        src1Global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(src1Gm), dataSize);
        dstGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(dstGm), dataSize);
        pipe.InitBuffer(inQueue0, 1, dataSize * sizeof(srcType));
        pipe.InitBuffer(inQueue1, 1, dataSize * sizeof(srcType));
        pipe.InitBuffer(outQueue, 1, dataSize * sizeof(srcType));
        if ((sizeof(srcType) == sizeof(half)) && (tmpBufSize > 0)) {
            pipe.InitBuffer(buf, bufSize * sizeof(srcType));
        }
    }
    __aicore__ inline void Process(uint32_t tmpBufSize, uint32_t calCount)
    {
        CopyIn();
        Compute(tmpBufSize, calCount);
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<srcType> src0Local = inQueue0.AllocTensor<srcType>();
        AscendC::LocalTensor<srcType> src1Local = inQueue1.AllocTensor<srcType>();
        AscendC::DataCopy(src0Local, src0Global, dataSize);
        AscendC::DataCopy(src1Local, src1Global, dataSize);
        inQueue0.EnQue(src0Local);
        inQueue1.EnQue(src1Local);
    }
    __aicore__ inline void Compute(uint32_t tmpBufSize, uint32_t calCount)
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.AllocTensor<srcType>();
        AscendC::LocalTensor<srcType> src0Local = inQueue0.DeQue<srcType>();
        AscendC::LocalTensor<srcType> src1Local = inQueue1.DeQue<srcType>();
        AscendC::LocalTensor<uint8_t> temp;
        if ((sizeof(srcType) == sizeof(half)) && (tmpBufSize > 0)) {
            temp = buf.Get<uint8_t>();
        }
        // Select the prototype that matches the available tmp buffer and calCount.
        if ((tmpBufSize > 0) && (calCount > 0)) {
            AscendC::GeGLU<srcType, false>(dstLocal, src0Local, src1Local, temp, calCount);
        } else if (tmpBufSize > 0) {
            AscendC::GeGLU<srcType, false>(dstLocal, src0Local, src1Local, temp);
        } else if (calCount > 0) {
            AscendC::GeGLU<srcType, false>(dstLocal, src0Local, src1Local, calCount);
        } else {
            AscendC::GeGLU<srcType, false>(dstLocal, src0Local, src1Local);
        }
        outQueue.EnQue<srcType>(dstLocal);
        inQueue0.FreeTensor(src0Local);
        inQueue1.FreeTensor(src1Local);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.DeQue<srcType>();
        AscendC::DataCopy(dstGlobal, dstLocal, dataSize);
        outQueue.FreeTensor(dstLocal);
    }

private:
    AscendC::GlobalTensor<srcType> src0Global;
    AscendC::GlobalTensor<srcType> src1Global;
    AscendC::GlobalTensor<srcType> dstGlobal;
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueue0;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueue1;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueue;
    AscendC::TBuf<AscendC::TPosition::VECCALC> buf;
    uint32_t dataSize = 0;
};

template <typename dataType>
__aicore__ void kernel_geglu_operator(GM_ADDR src0Gm, GM_ADDR src1Gm, GM_ADDR dstGm, uint32_t srcSize,
                                      uint32_t tmpBufSize, uint32_t calCount)
{
    KernelGeGLU<dataType> op;
    op.Init(src0Gm, src1Gm, dstGm, srcSize, tmpBufSize);
    op.Process(tmpBufSize, calCount);
}
```
Input data (srcTensor0):
[ 1.6025391   3.4765625   3.4316406   3.7539062  -1.3330078   0.72314453
 -3.0078125   0.85498047 -1.3691406   2.6894531  -2.9101562  -3.6992188
 -2.2734375  -2.859375    2.5683594  -1.7802734]
Input data (srcTensor1):
[-0.6015625   1.9589844   1.9257812   3.8769531   0.5878906   2.9179688
 -1.8847656   3.2304688   2.8945312   2.4550781   1.3730469  -1.9248047
  0.7919922  -2.5332031  -2.1425781  -2.9433594]
Output data (dstLocal):
[-0.263916015625000000   6.640625000000000000   6.429687500000000000
 14.554687500000000000  -0.565429687500000000   2.107421875000000000
  0.168579101562500000   2.759765625000000000  -3.957031250000000000
  6.558593750000000000  -3.656250000000000000   0.192993164062500000
 -1.415039062500000000   0.039642333984375000  -0.087890625000000000
  0.007740020751953125]