Power

Function Description

Computes exponentiation element-wise. Three variants of the API are provided. Their processing logic is as follows:

  • Power(dstTensor, srcTensor1, srcTensor2): dstTensor_i = srcTensor1_i ^ srcTensor2_i
  • Power(dstTensor, srcTensor1, scalarValue): dstTensor_i = srcTensor1_i ^ scalarValue
  • Power(dstTensor, scalarValue, srcTensor2): dstTensor_i = scalarValue ^ srcTensor2_i

Prototype

  • Power(dstTensor, srcTensor1, srcTensor2)
    • Pass the temporary space through the sharedTmpBuffer input parameter.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const LocalTensor<T>& src1Tensor, const LocalTensor<uint8_t>& sharedTmpBuffer, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const LocalTensor<T>& src1Tensor, const LocalTensor<uint8_t>& sharedTmpBuffer)
        
    • Allocate the temporary space through the API framework.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const LocalTensor<T>& src1Tensor, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const LocalTensor<T>& src1Tensor)
        
  • Power(dstTensor, srcTensor1, scalarValue)
    • Pass the temporary space through the sharedTmpBuffer input parameter.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const T& src1Scalar, const LocalTensor<uint8_t>& sharedTmpBuffer, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const T& src1Scalar, const LocalTensor<uint8_t>& sharedTmpBuffer)
        
    • Allocate the temporary space through the API framework.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const T& src1Scalar, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const LocalTensor<T>& src0Tensor, const T& src1Scalar)
        
  • Power(dstTensor, scalarValue, srcTensor2)
    • Pass the temporary space through the sharedTmpBuffer input parameter.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const T& src0Scalar, const LocalTensor<T>& src1Tensor, const LocalTensor<uint8_t>& sharedTmpBuffer, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const T& src0Scalar, const LocalTensor<T>& src1Tensor, const LocalTensor<uint8_t>& sharedTmpBuffer)
        
    • Allocate the temporary space through the API framework.
      • All or part of the source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const T& src0Scalar, const LocalTensor<T>& src1Tensor, uint32_t calCount)
        
      • All source operand tensors are involved in computation.
        template <typename T, bool isReuseSource = false>
        __aicore__ inline void Power(const LocalTensor<T>& dstTensor, const T& src0Scalar, const LocalTensor<T>& src1Tensor)
        

Due to the complex mathematical computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space can be allocated through the API framework or passed by developers through the sharedTmpBuffer input parameter.

  • When the API framework allocates the temporary space, developers do not need to allocate it explicitly, but must reserve enough free space for it.
  • When the temporary space is passed through the sharedTmpBuffer input parameter, that tensor serves as the temporary space and the API framework performs no allocation. This lets developers manage the sharedTmpBuffer space themselves and reuse the buffer across API calls, so that it is not repeatedly allocated and deallocated, improving flexibility and buffer utilization.

If the API framework is used, developers must reserve the temporary space. If sharedTmpBuffer is used, developers must allocate space for sharedTmpBuffer. In either case, obtain the size of the temporary space (BufferSize) by using the API described in GetPowerMaxMinTmpSize.

Parameters

Table 1 Parameters in the template

  • T: Data type of the operands.
  • isReuseSource: Whether the source operand can be modified. This parameter is reserved; pass the default value false.

Table 2 API parameters

  • dstTensor (output): Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.
  • src0Tensor (input): Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The source operand must have the same data type as the destination operand.
  • src1Tensor (input): Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The source operand must have the same data type as the destination operand.
  • src0Scalar/src1Scalar (input): Source operand of the Scalar type. It must have the same data type as the destination operand.
  • sharedTmpBuffer (input): Temporary memory space. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. For details about how to obtain the temporary space size for the three Power APIs with different input data types, see GetPowerMaxMinTmpSize.

Returns

None

Availability

Constraints

  • The source operand address must not overlap the destination operand address.
  • For details about the alignment requirements of the operand address offset, see General Restrictions.

Example

This example shows only part of the code used in the computation process. To run the sample, copy the code segment and replace the corresponding code in the Compute function of the Template Sample.

  • Power(dstTensor, srcTensor1, srcTensor2)
    Power(dstLocal, srcLocal1, srcLocal2);
    
    Result example:
    Input data (srcLocal1): [1.4608411 4.344736 ... 0.46437776]
    Input data (srcLocal2): [-5.4534287 4.5122147 ... -0.9344089]
    Output data (dstLocal): [0.12657544 756.1846 ... 2.0477564]
    
  • Power(dstTensor, srcTensor1, scalarValue)
    Power(dstLocal, srcLocal1, scalarValue);
    
    Result example:
    Input data (srcLocal1): [2.263972 2.902264 ... 0.40299487]
    Input data (scalarValue): 1.2260373
    Output data (dstLocal): [2.7232351 3.6926038 ... 0.32815763]
    
  • Power(dstTensor, scalarValue, srcTensor2)
    Power(dstLocal, scalarValue, srcLocal2);
    
    Result example:
    Input data (scalarValue): 4.382112
    Input data (srcLocal2): [5.504859 2.0677629 ... 1.053188]
    Output data (dstLocal): [3407.0386 21.225077 ... 4.7403817]
    

Template Sample

#include "kernel_operator.h"
template <typename srcType>
class KernelPower
{
public:
    __aicore__ inline KernelPower() {}
    __aicore__ inline void Init(GM_ADDR src1Gm, GM_ADDR src2Gm, GM_ADDR dstGm, uint32_t srcSize)
    {
        src1Global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(src1Gm), srcSize);
        src2Global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(src2Gm), srcSize);
        dstGlobal.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(dstGm), srcSize);
        pipe.InitBuffer(inQueueX1, 1, srcSize * sizeof(srcType));
        pipe.InitBuffer(inQueueX2, 1, srcSize * sizeof(srcType));
        pipe.InitBuffer(outQueue, 1, srcSize * sizeof(srcType));
        bufferSize = srcSize;
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<srcType> srcLocal1 = inQueueX1.AllocTensor<srcType>();
        AscendC::DataCopy(srcLocal1, src1Global, bufferSize);
        inQueueX1.EnQue(srcLocal1);
        AscendC::LocalTensor<srcType> srcLocal2 = inQueueX2.AllocTensor<srcType>();
        AscendC::DataCopy(srcLocal2, src2Global, bufferSize);
        inQueueX2.EnQue(srcLocal2);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.AllocTensor<srcType>();
        AscendC::LocalTensor<srcType> srcLocal1 = inQueueX1.DeQue<srcType>();
        AscendC::LocalTensor<srcType> srcLocal2 = inQueueX2.DeQue<srcType>();
        // Take the first element of the first input as a scalar operand.
        srcType scalarValue1 = srcLocal1.GetValue(0);
        // This sample demonstrates the Power(dstTensor, scalarValue, srcTensor2)
        // variant; replace this call with a code segment from the Example section
        // to exercise the other variants.
        AscendC::Power<srcType, false>(dstLocal, scalarValue1, srcLocal2);
        outQueue.EnQue<srcType>(dstLocal);
        inQueueX1.FreeTensor(srcLocal1);
        inQueueX2.FreeTensor(srcLocal2);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.DeQue<srcType>();
        AscendC::DataCopy(dstGlobal, dstLocal, bufferSize);
        outQueue.FreeTensor(dstLocal);
    }

private:
    AscendC::GlobalTensor<srcType> src1Global;
    AscendC::GlobalTensor<srcType> src2Global;
    AscendC::GlobalTensor<srcType> dstGlobal;
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueX1;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueX2;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueue;
    uint32_t bufferSize = 0;
};
template <typename dataType>
__aicore__ void kernel_power_operator(GM_ADDR src1Gm, GM_ADDR src2Gm, GM_ADDR dstGm, uint32_t srcSize)
{
    KernelPower<dataType> op;
    op.Init(src1Gm, src2Gm, dstGm, srcSize);
    op.Process();
}

extern "C" __global__ __aicore__ void power_operator_custom(GM_ADDR src1Gm, GM_ADDR src2Gm, GM_ADDR dstGm, uint32_t srcSize)
{
    kernel_power_operator<half>(src1Gm, src2Gm, dstGm, srcSize);
}