Digamma
Function Description
Computes the logarithmic derivative of the gamma function (the digamma function) of x element-wise, using the following formula, where PAR represents the number of elements that can be processed by the Vector Unit in one iteration and Γ represents the gamma function:

ψ(xᵢ) = d/dxᵢ ln(Γ(xᵢ)), i = 0, 1, …, PAR − 1


Prototype
- Pass the temporary space through the sharedTmpBuffer input parameter.

```cpp
template <typename T, bool isReuseSource = false>
__aicore__ inline void Digamma(LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor, LocalTensor<uint8_t> &sharedTmpBuffer, const uint32_t calCount)
```

- Allocate the temporary space through the API framework.

```cpp
template <typename T, bool isReuseSource = false>
__aicore__ inline void Digamma(LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor, const uint32_t calCount)
```
Due to the complex mathematical computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space can be passed by developers through the sharedTmpBuffer input parameter or allocated by the API framework.
- When the sharedTmpBuffer input parameter is used, the tensor itself serves as the temporary space and the API framework performs no allocation. Developers manage the sharedTmpBuffer space and can reuse the buffer after calling the API, so the buffer is not repeatedly allocated and deallocated, improving flexibility and buffer utilization.
- When the API framework allocates the temporary space, developers do not need to allocate it, but must reserve the required size.
If sharedTmpBuffer is used, developers must allocate space for the tensor. If the API framework is used, developers must reserve the temporary space. To obtain the size of the temporary space (BufferSize) to be reserved, use the API provided in GetDigammaMaxMinTmpSize.
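The two allocation modes follow a common pattern: one overload takes caller-owned scratch memory that can be reused across calls, while the other sizes and owns the scratch internally. A minimal plain-C++ sketch of this pattern (hypothetical names; not the AscendC implementation) is:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical query mirroring the role of GetDigammaMaxMinTmpSize: how many
// scratch bytes the computation needs for `count` float elements.
static uint32_t ScratchBytesNeeded(uint32_t count) { return count * sizeof(float); }

// Overload 1: the caller passes scratch memory and may reuse it across calls,
// avoiding repeated allocation and deallocation.
static void Compute(float* dst, const float* src, uint8_t* scratch, uint32_t count) {
    float* tmp = reinterpret_cast<float*>(scratch);
    for (uint32_t i = 0; i < count; ++i) {
        tmp[i] = src[i] * 0.5f;   // stand-in for intermediate results
        dst[i] = tmp[i] + tmp[i];
    }
}

// Overload 2: scratch is allocated internally; simpler for the caller, but the
// buffer cannot be reused between calls.
static void Compute(float* dst, const float* src, uint32_t count) {
    std::vector<uint8_t> scratch(ScratchBytesNeeded(count));
    Compute(dst, src, scratch.data(), count);
}
```

In a loop that calls the API many times, the caller-provided form amortizes the buffer setup across all calls, which is the rationale behind sharedTmpBuffer.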
Parameters

| Parameter | Description |
|---|---|
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. The default value is false. If the source operand may be modified, set this parameter to true to save memory: the srcTensor memory is then reused to hold intermediate results during internal computation of this API. If it is set to false, the srcTensor memory is not reused. This parameter can be enabled only for float input data, not for half input data. For details about how to use isReuseSource, see More Examples. |

| Parameter | Input/Output | Description |
|---|---|---|
| dstTensor | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. |
| srcTensor | Input | Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The source operand must have the same data type as the destination operand. |
| sharedTmpBuffer | Input | Temporary buffer. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. This buffer, provided by developers, stores intermediate variables during the complex computation in Digamma. For details about how to obtain the temporary space size (BufferSize), see GetDigammaMaxMinTmpSize. |
| calCount | Input | Number of data elements actually computed. The value range is [0, srcTensor.GetSize()]. |
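The calCount rule means only the first calCount elements of srcTensor are processed, and calCount must not exceed the tensor size. A plain-C++ sketch of this semantics (hypothetical helper, not the AscendC API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical element-wise apply: only the first calCount elements are
// computed; the tail of dst is left untouched, mirroring the calCount rule.
static void ApplyFirstN(std::vector<float>& dst, const std::vector<float>& src,
                        uint32_t calCount) {
    assert(calCount <= src.size());   // value range is [0, src size]
    for (uint32_t i = 0; i < calCount; ++i) {
        dst[i] = -src[i];             // stand-in for the digamma computation
    }
}
```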
Returns
None
Availability
Constraints
- The source operand address must not overlap the destination operand address.
- sharedTmpBuffer must not overlap the addresses of the source operand and destination operand.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
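The no-overlap constraints above can be checked mechanically before a call. A minimal sketch of such a check in plain C++ (a hypothetical helper, not part of the AscendC API):

```cpp
#include <cstddef>
#include <cstdint>

// Two byte ranges [a, a+n) and [b, b+m) overlap iff each starts before the
// other ends. Useful for validating src/dst/sharedTmpBuffer separation.
static bool Overlaps(const void* a, size_t n, const void* b, size_t m) {
    const uintptr_t pa = reinterpret_cast<uintptr_t>(a);
    const uintptr_t pb = reinterpret_cast<uintptr_t>(b);
    return pa < pb + m && pb < pa + n;
}
```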
Example
For details about the complete call example, see More Examples.
```cpp
AscendC::TPipe pipe;
AscendC::TQue<AscendC::TPosition::VECCALC, 1> tmpQue;
pipe.InitBuffer(tmpQue, 1, bufferSize); // bufferSize is obtained through the tiling parameter on the host.
AscendC::LocalTensor<uint8_t> sharedTmpBuffer = tmpQue.AllocTensor<uint8_t>();
// The input shape is 1024, the input data type is float, and 1024 data elements are actually computed.
AscendC::Digamma<float, false>(dstLocal, srcLocal, sharedTmpBuffer, 1024);
```

Input data (srcLocal): [5.3675685 0.26528683 -2.872628 ... 2.9387941 9.001339]
Output data (dstLocal): [1.5843406 -3.978809 -6.2081366 ... 0.8983184 2.1407988]