Tan
Function Description
Computes tangent element-wise using the following formula, where PAR indicates the number of elements that can be processed by a vector unit in one iteration:

dst_i = tan(src_i), i = 0, 1, ..., PAR-1
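The element-wise semantics can be illustrated with a minimal host-side C++ reference model. This uses standard `std::tan`, not the device API; the function name `TanRef` is illustrative only:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Host-side reference model of the element-wise Tan semantics:
// dst[i] = tan(src[i]) for the first calCount elements of src.
std::vector<float> TanRef(const std::vector<float>& src, std::size_t calCount) {
    std::vector<float> dst(calCount);
    for (std::size_t i = 0; i < calCount; ++i) {
        dst[i] = std::tan(src[i]);
    }
    return dst;
}
```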
Prototype
- Pass the temporary space through the sharedTmpBuffer input parameter.
  - All or part of the source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void Tan(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer, const uint32_t calCount)
    ```

  - All source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void Tan(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer)
    ```

- Allocate the temporary space through the API framework.
  - All or part of the source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void Tan(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const uint32_t calCount)
    ```

  - All source operand tensors are involved in computation.

    ```cpp
    template <typename T, bool isReuseSource = false>
    __aicore__ inline void Tan(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor)
    ```
Due to the complex mathematical computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space can be allocated through the API framework or passed by developers through the sharedTmpBuffer input parameter.
- When the API framework is used for temporary space allocation, developers do not need to allocate the space, but must reserve the required size for the space.
- When the sharedTmpBuffer input parameter is used for passing the temporary space, the tensor serves as the temporary space. In this case, the API framework is not required for temporary space allocation. This enables developers to manage the sharedTmpBuffer space and reuse the buffer after calling the API, so that the buffer is not repeatedly allocated and deallocated, improving the flexibility and buffer utilization.
If the API framework is used, developers must reserve the temporary space. If sharedTmpBuffer is used, developers must allocate space for sharedTmpBuffer. To obtain the size of the temporary space (BufferSize) to be reserved, use the API provided in GetTanMaxMinTmpSize.
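The buffer-reuse benefit described above can be sketched in plain host-side C++. This is not the AscendC API; the `TmpPool` class is a hypothetical stand-in showing the pattern of sizing one scratch buffer once and handing out the same storage across many calls instead of reallocating per call:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch (not the AscendC API): a scratch buffer sized once
// to the maximum requirement and reused across calls, avoiding repeated
// allocation and deallocation.
class TmpPool {
public:
    explicit TmpPool(std::size_t bufferSize) : buf_(bufferSize) {}

    // Returns the same storage on every call; the caller's requirement
    // must not exceed the reserved size.
    uint8_t* Acquire(std::size_t needed) {
        assert(needed <= buf_.size());
        ++acquireCount_;
        return buf_.data();
    }

    std::size_t size() const { return buf_.size(); }
    int acquireCount() const { return acquireCount_; }

private:
    std::vector<uint8_t> buf_;
    int acquireCount_ = 0;
};
```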
Parameters
| Parameter | Description |
|---|---|
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. This parameter is reserved. Pass the default value false. |
| Parameter | Input/Output | Description |
|---|---|---|
| dstTensor | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. |
| srcTensor | Input | Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The source operand must have the same data type as the destination operand. |
| sharedTmpBuffer | Input | Temporary buffer. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. For details about how to obtain the temporary space size (BufferSize), see GetTanMaxMinTmpSize. |
| calCount | Input | Number of data elements to be computed. The value range is (0, srcTensor.GetSize()]. |
Returns
None
Availability
Constraints
- The input data must be within the range of (-65504.0, 65504.0).
- The source operand address must not overlap the destination operand address.
- sharedTmpBuffer must not overlap the addresses of the source operand and destination operand.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
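The input-range constraint can be checked on the host before launching the kernel. The sketch below is an assumed validation helper, not part of the AscendC API; 65504 is the largest finite value representable in half precision:

```cpp
#include <vector>

// Host-side sketch: verify every input element lies inside the open
// interval (-65504.0, 65504.0) required by the Tan constraint.
// NaN values also fail the check, since they satisfy neither comparison.
bool InputsInTanRange(const std::vector<float>& src) {
    const float kLimit = 65504.0f;  // max finite half-precision value
    for (float v : src) {
        if (!(v > -kLimit && v < kLimit)) {
            return false;
        }
    }
    return true;
}
```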
Example
```cpp
AscendC::TPipe pipe;
AscendC::TQue<AscendC::TPosition::VECCALC, 1> tmpQue;
pipe.InitBuffer(tmpQue, 1, bufferSize); // bufferSize is obtained through the tiling parameter on the host.
AscendC::LocalTensor<uint8_t> sharedTmpBuffer = tmpQue.AllocTensor<uint8_t>();
// The input shape is 1024, the input data type of the operator is half, and the number of actually computed data elements is 512.
AscendC::Tan(dstLocal, srcLocal, sharedTmpBuffer, 512);
```
```
Input data (srcLocal):  [-0.11241488 0.80886058 4.07060815 ... -3.90772673 60.49020877]
Output data (dstLocal): [-0.11289082 1.04806128 1.33812286 ... -0.96219541 1.02953215]
```