LayerNormGradBeta
Function Usage
Computes the backward gamma and beta gradients. Used in conjunction with LayerNormGrad, it yields the full set of LayerNorm backward outputs: pd_x (from LayerNormGrad) together with pd_gamma and pd_beta (from this API).
The formulas are as follows, where the reductions run over the B and S axes and resForGamma is the intermediate result produced in advance by LayerNormGrad:

$$pd\_gamma_h = \sum_{b,s} dy_{b,s,h} \cdot resForGamma_{b,s,h}$$

$$pd\_beta_h = \sum_{b,s} dy_{b,s,h}$$
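For intuition, the following is a minimal scalar reference of these two reductions in plain standard C++, independent of the device API; the function name and the fused [B*S, H] layout are illustrative only:

```cpp
#include <cstddef>

// Scalar reference for the reductions above: pdGamma[h] and pdBeta[h]
// accumulate over all B*S rows. Inputs are laid out as [B*S, H] and
// outputs as [H], mirroring the API operands.
void LayerNormGradBetaRef(const float *dy, const float *resForGamma,
                          float *pdGamma, float *pdBeta,
                          std::size_t bsLength, std::size_t hLength)
{
    for (std::size_t h = 0; h < hLength; ++h) {
        pdGamma[h] = 0.0f;
        pdBeta[h] = 0.0f;
    }
    for (std::size_t bs = 0; bs < bsLength; ++bs) {
        for (std::size_t h = 0; h < hLength; ++h) {
            pdGamma[h] += dy[bs * hLength + h] * resForGamma[bs * hLength + h];
            pdBeta[h] += dy[bs * hLength + h];
        }
    }
}
```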
Prototype
Due to the complex computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space size (BufferSize) is obtained as follows: call the GetLayerNormGradBetaMaxMinTmpSize API provided in LayerNormGradBeta Tiling to obtain the maximum and minimum temporary space sizes required. The minimum size guarantees functional correctness; the maximum size improves performance.
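A host-side sizing sketch follows. The signature shown is an assumption based on the common AscendC Get*MaxMinTmpSize convention; consult LayerNormGradBeta Tiling for the authoritative declaration and header before use:

```cpp
// Host-side sizing sketch. The call below assumes the signature
// GetLayerNormGradBetaMaxMinTmpSize(srcShape, typeSize, isReuseSource,
// maxValue, minValue); header names and the exact declaration are given
// in LayerNormGradBeta Tiling.
#include <cstdint>
#include <vector>
#include "graph/tensor.h" // ge::Shape

void QueryLayerNormGradBetaTmpSize(int64_t b, int64_t s, int64_t h,
                                   uint32_t &maxValue, uint32_t &minValue)
{
    ge::Shape srcShape(std::vector<int64_t>{b, s, h}); // input shape [B, S, H]
    const uint32_t typeSize = 2;                       // bytes per element for half
    AscendC::GetLayerNormGradBetaMaxMinTmpSize(srcShape, typeSize,
        /*isReuseSource=*/false, maxValue, minValue);
}
```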
The temporary space can be allocated through the API framework or passed by developers through the sharedTmpBuffer input parameter. Therefore, there are two types of function prototypes for the LayerNormGradBeta API.
- Pass the temporary space through the sharedTmpBuffer input parameter.
```cpp
template <typename T, bool isReuseSource = false>
__aicore__ inline void LayerNormGradBeta(const LocalTensor<T> &outputPdGamma,
    const LocalTensor<T> &outputPdBeta, const LocalTensor<T> &resForGamma,
    const LocalTensor<T> &inputDy, const LocalTensor<uint8_t> &sharedTmpBuffer,
    const LayerNormGradBetaTiling &tiling)
```
This method lets developers allocate and manage the temporary buffer space themselves and reuse it after the API call, so that the buffer is not repeatedly allocated and released, improving flexibility and buffer utilization. A kernel-side sketch of this variant follows this list.
- Allocate the temporary space through the API framework.
```cpp
template <typename T, bool isReuseSource = false>
__aicore__ inline void LayerNormGradBeta(const LocalTensor<T> &outputPdGamma,
    const LocalTensor<T> &outputPdBeta, const LocalTensor<T> &resForGamma,
    const LocalTensor<T> &inputDy, LayerNormGradBetaTiling &tiling)
```
With this method, developers do not need to allocate the temporary space themselves; however, they must still reserve enough free space for the framework to claim internally.
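For the first prototype (temporary space passed through sharedTmpBuffer), a kernel-side fragment might look like the following sketch. It assumes a kernel scaffold like the one in the Example section below, with bufferSize obtained from the host-side GetLayerNormGradBetaMaxMinTmpSize query and delivered through the tiling data; the variable names are illustrative:

```cpp
// Sketch only: assumes pipe, tiling, the input/output local tensors, and
// bufferSize (from GetLayerNormGradBetaMaxMinTmpSize) already exist.
AscendC::TBuf<AscendC::TPosition::VECCALC> calcBuf;
pipe.InitBuffer(calcBuf, bufferSize);
AscendC::LocalTensor<uint8_t> sharedTmp = calcBuf.Get<uint8_t>();

// The API uses the caller-managed buffer instead of allocating its own;
// sharedTmp can be reused by subsequent API calls.
AscendC::LayerNormGradBeta<half, false>(outputPdGammaLocal, outputPdBetaLocal,
    resForGammaLocal, inputDyLocal, sharedTmp, tiling);
```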
Parameters
| Parameter | Description |
|---|---|
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. The default value is false. If the source operand is allowed to be modified, enabling this parameter saves buffer space: when set to true, the inputDy memory is reused during the internal computation of this API; when set to false, the inputDy memory is not reused. This parameter can be enabled for float input data but cannot be enabled for half input data. For details about how to use isReuseSource, see More Examples. |
| Parameter | Input/Output | Meaning |
|---|---|---|
| outputPdGamma | Output | Destination operand, of type LocalTensor, with shape [H]. For the definition of the LocalTensor data structure, see LocalTensor. The last-axis length must be 32-byte aligned. |
| outputPdBeta | Output | Destination operand, of type LocalTensor, with shape [H]. For the definition of the LocalTensor data structure, see LocalTensor. The last-axis length must be 32-byte aligned. |
| resForGamma | Input | Source operand, of type LocalTensor, with shape [B, S, H]. For the definition of the LocalTensor data structure, see LocalTensor. Its data type must match that of the destination operands, and the last-axis length must be 32-byte aligned. resForGamma is obtained by calling the LayerNormGrad API in advance. |
| inputDy | Input | Source operand, of type LocalTensor, with shape [B, S, H]. For the definition of the LocalTensor data structure, see LocalTensor. Its data type must match that of the destination operands, and the last-axis length must be 32-byte aligned. |
| sharedTmpBuffer | Input | Shared buffer used to store temporary data generated during the internal computation of this API. It lets developers manage the buffer space themselves and reuse it after the API call, so that the buffer is not repeatedly allocated and released, improving flexibility and buffer utilization. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. |
| tiling | Input | Tiling information required for the LayerNormGradBeta computation. For details about how to obtain it, see LayerNormGradBeta Tiling. |
| isReuseSource | Input | Whether the source operand can be modified. The default value is false. When set to true, the inputDy buffer is reused during the internal computation of this API to save buffer space. When set to false, the inputDy buffer is not reused; an extra temporary buffer is allocated during the internal computation and automatically released after the API call. This parameter can be enabled for float input data but cannot be enabled for half input data. |
Returns
None
Availability
Constraints
- For details about the alignment requirements of the operand address offset, see General Restrictions.
- The tensor space of src and dst can be reused.
- The input shape must be in ND format.
- If the input data does not meet the alignment requirements, developers need to pad the data; the padded elements should be set to 0 to prevent abnormal values from affecting network computation (see the alignment sketch after this list).
- The last axis (H axis) cannot be split.
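For the 32-byte alignment and zero-padding rules above, a small host-side helper might look like the following sketch. The helper names are hypothetical and only illustrate the arithmetic:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper: round the last-axis element count up to a 32-byte
// boundary for an element type of elemSize bytes.
inline uint32_t AlignedLastAxis(uint32_t hLength, uint32_t elemSize)
{
    const uint32_t blockBytes = 32U;
    const uint32_t bytes = hLength * elemSize;
    const uint32_t alignedBytes = (bytes + blockBytes - 1U) / blockBytes * blockBytes;
    return alignedBytes / elemSize;
}

// Zero-fill the padded tail of one [H] row so the extra elements do not
// disturb the reductions (uint16_t stands in for half on the host).
inline void ZeroPadRow(uint16_t *row, uint32_t hLength, uint32_t alignedH)
{
    std::memset(row + hLength, 0, (alignedH - hLength) * sizeof(uint16_t));
}
```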
Example
```cpp
#include "kernel_operator.h"

template <typename T, bool isReuseSource = false>
class KernelLayernormGradBeta {
public:
    __aicore__ inline KernelLayernormGradBeta() {}
    __aicore__ inline void Init(__gm__ uint8_t *resForGammaGm, __gm__ uint8_t *inputDyGm,
        __gm__ uint8_t *outputPdGammaGm, __gm__ uint8_t *outputPdBetaGm, const LayerNormGradBetaTiling &tiling)
    {
        this->bLength = tiling.bLength;
        this->sLength = tiling.sLength;
        this->hLength = tiling.hLength;
        this->tiling = tiling;
        bshLength = bLength * sLength * hLength;
        bsLength = bLength * sLength;
        // Bind global-memory addresses: inputs are [B, S, H], outputs are [H].
        resForGammaGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ T *>(resForGammaGm), bshLength);
        inputDyGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ T *>(inputDyGm), bshLength);
        outputPdGammaGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ T *>(outputPdGammaGm), hLength);
        outputPdBetaGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ T *>(outputPdBetaGm), hLength);
        pipe.InitBuffer(inQueueResForGamma, 1, sizeof(T) * bshLength);
        pipe.InitBuffer(inQueueDy, 1, sizeof(T) * bshLength);
        pipe.InitBuffer(outQueuePdGamma, 1, sizeof(T) * hLength);
        pipe.InitBuffer(outQueuePdBeta, 1, sizeof(T) * hLength);
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<T> resForGammaLocal = inQueueResForGamma.AllocTensor<T>();
        AscendC::LocalTensor<T> inputDyLocal = inQueueDy.AllocTensor<T>();
        AscendC::DataCopy(resForGammaLocal, resForGammaGlobal, bshLength);
        AscendC::DataCopy(inputDyLocal, inputDyGlobal, bshLength);
        inQueueResForGamma.EnQue(resForGammaLocal);
        inQueueDy.EnQue(inputDyLocal);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<T> resForGammaLocal = inQueueResForGamma.DeQue<T>();
        AscendC::LocalTensor<T> inputDyLocal = inQueueDy.DeQue<T>();
        AscendC::LocalTensor<T> outputPdGammaLocal = outQueuePdGamma.AllocTensor<T>();
        AscendC::LocalTensor<T> outputPdBetaLocal = outQueuePdBeta.AllocTensor<T>();
        // Framework-allocated temporary space variant: no sharedTmpBuffer argument.
        AscendC::LayerNormGradBeta<T, isReuseSource>(
            outputPdGammaLocal, outputPdBetaLocal, resForGammaLocal, inputDyLocal, tiling);
        outQueuePdGamma.EnQue<T>(outputPdGammaLocal);
        outQueuePdBeta.EnQue<T>(outputPdBetaLocal);
        inQueueResForGamma.FreeTensor(resForGammaLocal);
        inQueueDy.FreeTensor(inputDyLocal);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<T> outputPdGammaLocal = outQueuePdGamma.DeQue<T>();
        AscendC::LocalTensor<T> outputPdBetaLocal = outQueuePdBeta.DeQue<T>();
        AscendC::DataCopy(outputPdGammaGlobal, outputPdGammaLocal, hLength);
        AscendC::DataCopy(outputPdBetaGlobal, outputPdBetaLocal, hLength);
        outQueuePdGamma.FreeTensor(outputPdGammaLocal);
        outQueuePdBeta.FreeTensor(outputPdBetaLocal);
    }

private:
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueResForGamma, inQueueDy;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueuePdGamma, outQueuePdBeta;
    AscendC::GlobalTensor<T> resForGammaGlobal;
    AscendC::GlobalTensor<T> inputDyGlobal;
    AscendC::GlobalTensor<T> outputPdGammaGlobal;
    AscendC::GlobalTensor<T> outputPdBetaGlobal;
    uint32_t bLength;
    uint32_t sLength;
    uint32_t hLength;
    uint32_t bshLength;
    uint32_t bsLength;
    LayerNormGradBetaTiling tiling;
};

extern "C" __global__ __aicore__ void kernel_layernorm_grad_beta_operator(
    GM_ADDR outputPdGammaGm, GM_ADDR outputPdBetaGm, GM_ADDR resForGammaGm, GM_ADDR inputDyGm, GM_ADDR tiling)
{
    GET_TILING_DATA(tilingData, tiling);
    KernelLayernormGradBeta<half, false> op;
    op.Init(resForGammaGm, inputDyGm, outputPdGammaGm, outputPdBetaGm, tilingData.layerNormGradBetaTiling);
    op.Process();
}
```