Asinh

Function Description

Computes the inverse hyperbolic sine element-wise using the following formula:

dst_i = asinh(src_i) = ln(src_i + sqrt(src_i^2 + 1)), i ∈ [0, PAR)

where PAR indicates the number of elements that can be processed by a vector unit in one iteration.

Prototype

  • Pass the temporary space through the sharedTmpBuffer input parameter.
    • All or part of the elements in the source operand are involved in the computation (the number is specified by calCount).
      template <typename T, bool isReuseSource = false>
      __aicore__ inline void Asinh(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer, const uint32_t calCount)
      
    • All elements in the source operand are involved in the computation.
      template <typename T, bool isReuseSource = false>
      __aicore__ inline void Asinh(const LocalTensor<T>& dstTensor, const LocalTensor<T>& srcTensor, const LocalTensor<uint8_t>& sharedTmpBuffer)
      
  • Allocate the temporary space through the API framework.
    • All or part of the elements in the source operand are involved in the computation (the number is specified by calCount).
      template <typename T, bool isReuseSource = false>
      __aicore__ inline void Asinh(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor, const uint32_t calCount)
      
    • All elements in the source operand are involved in the computation.
      template <typename T, bool isReuseSource = false>
      __aicore__ inline void Asinh(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor)
      

Due to the complex mathematical computation involved in the internal implementation of this API, additional temporary space is required to store intermediate variables generated during computation. The temporary space can be passed by developers through the sharedTmpBuffer input parameter or allocated through the API framework.

  • When the temporary space is passed through the sharedTmpBuffer input parameter, that tensor serves as the temporary space, and the API framework does not need to allocate it. Developers can manage the sharedTmpBuffer space themselves and reuse it across API calls, so the buffer is not repeatedly allocated and deallocated, which improves flexibility and buffer utilization.
  • When the API framework is used for temporary space allocation, developers do not need to allocate the space, but must reserve the required size for the space.

If sharedTmpBuffer is used, developers must allocate space for the tensor themselves. If the API framework is used, developers must reserve the required temporary space. In either case, obtain the temporary space size (BufferSize) to be reserved by using the API described in GetAsinhMaxMinTmpSize.

Parameters

Table 1 Parameters in the template

| Parameter | Description |
| --- | --- |
| T | Data type of the operand. |
| isReuseSource | Whether the source operand can be modified. This parameter is reserved; pass the default value false. |

Table 2 API parameters

| Parameter | Input/Output | Description |
| --- | --- | --- |
| dstTensor | Output | Destination operand. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. |
| srcTensor | Input | Source operand. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. The source operand must have the same data type as the destination operand. |
| sharedTmpBuffer | Input | Temporary buffer, provided by developers, used to store intermediate variables during the complex computation in Asinh. The type is LocalTensor; the supported TPosition values are VECIN, VECCALC, and VECOUT. For details about how to obtain the temporary space size (BufferSize), see GetAsinhMaxMinTmpSize. |
| calCount | Input | Number of data elements actually computed. The value range is [0, srcTensor.GetSize()]. |

Returns

None

Availability

Constraints

  • The value range of the input source data must be [–65504, 65504]. If the input is not within the range, the output is invalid.
  • The source operand address must not overlap the destination operand address.
  • sharedTmpBuffer must not overlap the addresses of the source operand and destination operand.
  • For details about the alignment requirements of the operand address offset, see General Restrictions.

Example

For details about the complete call example, see More Examples.

AscendC::TPipe pipe;
AscendC::TQue<AscendC::TPosition::VECCALC, 1> tmpQue;
pipe.InitBuffer(tmpQue, 1, bufferSize);  // bufferSize is obtained through the tiling parameter on the host.
AscendC::LocalTensor<uint8_t> sharedTmpBuffer = tmpQue.AllocTensor<uint8_t>();
// The input shape is 1024, the input data type of the operator is half, and the number of actually computed data elements is 512.
AscendC::Asinh(dstLocal, srcLocal, sharedTmpBuffer, 512);
Result example:
Input data (srcLocal): [0.80541134 0.08385705 0.49426016 ...  0.30962205 0.28947052]
Output data (dstLocal): [0.6344272 1.4868407 1.0538127  ...  1.2560008 1.2771227]