Log
Function Description
Computes, element-wise, the logarithm of the source operand with base e (Log), base 2 (Log2), or base 10 (Log10), using the following formula, where PAR indicates the number of elements that can be processed by a vector unit in one iteration.

dst_i = log_base(src_i), i ∈ [0, PAR), base ∈ {e, 2, 10}
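The element-wise semantics can be sketched on the host in standard C++. This is a reference model of the formula above, not the device implementation; `ReferenceLog` is a hypothetical helper name, and the change-of-base reduction is one common way to realize non-natural bases.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Host-side reference of the element-wise semantics (hypothetical helper,
// not part of the AscendC API): dst[i] = log_base(src[i]) for i in [0, calCount).
std::vector<float> ReferenceLog(const std::vector<float>& src, uint32_t calCount, float base)
{
    std::vector<float> dst(src.size(), 0.0f);
    const float invLnBase = 1.0f / std::log(base);      // change-of-base factor
    for (uint32_t i = 0; i < calCount && i < src.size(); ++i) {
        dst[i] = std::log(src[i]) * invLnBase;          // log_base(x) = ln(x) / ln(base)
    }
    return dst;
}
```

Elements at index `calCount` and beyond are left untouched, mirroring the `calCount` overloads below.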
Prototype
- e as the base:
- All or part of the source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, uint32_t calCount)
- All source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor)
- 2 as the base:
- Pass the temporary space through the sharedTmpBuffer input parameter.
- All or part of the source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log2(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer, uint32_t calCount)
- All source operand tensors are involved in computation.
template <typename T, bool isReuseSource = false> __aicore__ inline void Log2(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer)
- Allocate the temporary space through the API framework.
- All or part of the source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log2(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, uint32_t calCount)
- All source operand tensors are involved in computation.
template <typename T, bool isReuseSource = false> __aicore__ inline void Log2(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor)
- 10 as the base:
- All or part of the source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log10(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, uint32_t calCount)
- All source operand tensors are involved in computation.
template<typename T, bool isReuseSource = false> __aicore__ inline void Log10(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor)
Because the internal implementation of this API involves complex mathematical computation, additional temporary space is required to store intermediate variables generated during computation. The temporary space can either be passed in by the developer through the sharedTmpBuffer input parameter or allocated by the API framework.
- When the temporary space is passed through the sharedTmpBuffer input parameter, that tensor serves as the temporary space and the API framework performs no allocation. The developer manages the sharedTmpBuffer space and can reuse the buffer across API calls, so that the buffer is not repeatedly allocated and deallocated, improving flexibility and buffer utilization.
- When the API framework allocates the temporary space, the developer does not need to allocate it, but must reserve enough space for it.
In either case, the temporary space must be large enough: use the API described in GetLogMaxMinTmpSize to obtain the temporary space size (BufferSize) that must be allocated (sharedTmpBuffer) or reserved (framework allocation).
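One way to see why intermediate storage is needed: non-natural bases are commonly reduced to the natural logarithm, so the ln(x) intermediates must be staged somewhere before scaling. The following is an illustrative host-side sketch in standard C++ with a caller-provided scratch buffer (analogous in spirit to sharedTmpBuffer); it is not the AscendC internal implementation, and `Log2WithScratch` is a hypothetical name.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch: compute log2 element-wise via ln(x), staging the intermediates in a
// caller-provided scratch buffer. Illustrative only; not the device implementation.
void Log2WithScratch(float* dst, const float* src, float* scratch, uint32_t n)
{
    for (uint32_t i = 0; i < n; ++i) {
        scratch[i] = std::log(src[i]);            // intermediate: ln(x)
    }
    const float invLn2 = 1.0f / std::log(2.0f);
    for (uint32_t i = 0; i < n; ++i) {
        dst[i] = scratch[i] * invLn2;             // log2(x) = ln(x) / ln(2)
    }
}
```

Because the caller owns `scratch`, it can be reused across repeated calls instead of being allocated and freed each time, which is the flexibility the sharedTmpBuffer overloads provide.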
Parameters
| Parameter | Description |
|---|---|
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. This parameter is reserved; pass the default value false. |
| Parameter | Input/Output | Description |
|---|---|---|
| dstTensor | Output | Destination operand. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. |
| srcTensor | Input | Source operand. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. The source operand must have the same data type as the destination operand. |
| sharedTmpBuffer | Input | Temporary buffer. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. For details about how to obtain the temporary space size (BufferSize), see GetLogMaxMinTmpSize. |
| calCount | Input | Number of data elements actually computed. The value range is [0, srcTensor.GetSize()]. |
Returns
None
Availability
Constraints
- The source operand address must not overlap the destination operand address.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
Example
This example shows only part of the code used in the computation process. To run the sample code, copy the code segment below and use it to replace the corresponding code in the Compute function of the Template Sample.
- Log example
Log(dstLocal, srcLocal);
Result example:
Input data (srcLocal): [144.22607 9634.764 ... 1835.1245 3145.5125]
Output data (dstLocal): [4.971382 9.173133 ... 7.514868 8.053732]
- Log2 example
Log2(dstLocal, srcLocal);
Result example:
Input data (srcLocal): [6299.54 338.45963 ... 2.853525 5752.1323]
Output data (dstLocal): [12.621031 8.40284 ... 1.5127451 12.4898815]
- Log10 example
Log10(dstLocal, srcLocal);
Result example:
Input data (srcLocal): [712.7535 78.36265 ... 3099.0571 9313.082]
Output data (dstLocal): [2.8529394 1.8941091 ... 3.4912295 3.9690933]
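The printed sample values above can be spot-checked on the host with the standard library. This is a verification sketch in plain C++; the tolerances are loose because the device results were produced in half/float precision.

```cpp
#include <cassert>
#include <cmath>

// Host spot-check of the documented sample outputs (first element of each example).
bool CheckSamples()
{
    bool ok = true;
    ok &= std::fabs(std::log(144.22607f)  - 4.971382f)  < 1e-3f;  // Log
    ok &= std::fabs(std::log2(6299.54f)   - 12.621031f) < 1e-3f;  // Log2
    ok &= std::fabs(std::log10(712.7535f) - 2.8529394f) < 1e-3f;  // Log10
    return ok;
}
```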
Template Sample
```cpp
#include "kernel_operator.h"

template <typename srcType>
class KernelLog {
public:
    __aicore__ inline KernelLog() {}
    __aicore__ inline void Init(GM_ADDR srcGm, GM_ADDR dstGm, uint32_t srcSize)
    {
        src_global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(srcGm), srcSize);
        dst_global.SetGlobalBuffer(reinterpret_cast<__gm__ srcType *>(dstGm), srcSize);
        pipe.InitBuffer(inQueueX, 1, srcSize * sizeof(srcType));
        pipe.InitBuffer(outQueue, 1, srcSize * sizeof(srcType));
        bufferSize = srcSize;
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<srcType> srcLocal = inQueueX.AllocTensor<srcType>();
        AscendC::DataCopy(srcLocal, src_global, bufferSize);
        inQueueX.EnQue(srcLocal);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.AllocTensor<srcType>();
        AscendC::LocalTensor<srcType> srcLocal = inQueueX.DeQue<srcType>();
        AscendC::Log(dstLocal, srcLocal);
        // AscendC::Log10(dstLocal, srcLocal);
        // AscendC::Log2(dstLocal, srcLocal);
        outQueue.EnQue<srcType>(dstLocal);
        inQueueX.FreeTensor(srcLocal);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<srcType> dstLocal = outQueue.DeQue<srcType>();
        AscendC::DataCopy(dst_global, dstLocal, bufferSize);
        outQueue.FreeTensor(dstLocal);
    }

private:
    AscendC::GlobalTensor<srcType> src_global;
    AscendC::GlobalTensor<srcType> dst_global;
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueX;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueue;
    uint32_t bufferSize = 0;
};

template <typename dataType>
__aicore__ void kernel_log_operator(GM_ADDR srcGm, GM_ADDR dstGm, uint32_t srcSize)
{
    KernelLog<dataType> op;
    op.Init(srcGm, dstGm, srcSize);
    op.Process();
}

extern "C" __global__ __aicore__ void log_operator_custom(GM_ADDR srcGm, GM_ADDR dstGm, uint32_t srcSize)
{
    kernel_log_operator<half>(srcGm, dstGm, srcSize); // Input type and size
}
```