LeakyRelu

Function Usage

Computes Leaky ReLU on the source operand element-wise.

Leaky Rectified Linear Unit (Leaky ReLU) is a common activation function in artificial neural networks. Its mathematical expression is as follows:

dst = src, if src ≥ 0
dst = src × scalarValue, if src < 0

The difference between ReLU and Leaky ReLU is that Leaky ReLU has a small slope for negative values instead of being identically zero there.

In other words, if the value of src is negative, dst equals src multiplied by the scalar; if src is non-negative, dst retains the value of src.

Prototype

  • Computation of the first n pieces of data of a tensor
    template <typename T, bool isSetMask = true>
    __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const T& scalarValue, const int32_t& calCount)
    
  • High-dimensional tensor sharding computation
    • Bitwise mask mode
      template <typename T, bool isSetMask = true>
      __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const T& scalarValue, uint64_t mask[], const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      
    • Contiguous mask mode
      template <typename T, bool isSetMask = true>
      __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const T& scalarValue, uint64_t mask, const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      

When dstLocal and srcLocal use the TensorTrait type, the data type of TensorTrait differs from the data type of scalarValue (which corresponds to the LiteType type inside TensorTrait). Therefore, an additional template type U indicates the data type of scalarValue, and std::enable_if checks whether the LiteType extracted from T is the same as U. If they are the same, the API passes compilation; otherwise, compilation fails. The API prototype is defined as follows:

  • Computation of the first n pieces of data of a tensor
    template <typename T, typename U, bool isSetMask = true, typename std::enable_if<IsSameType<PrimT<T>, U>::value, bool>::type = true>
    __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const U& scalarValue, const int32_t& calCount)
    
  • High-dimensional tensor sharding computation
    • Bitwise mask mode
      template <typename T, typename U, bool isSetMask = true, typename std::enable_if<IsSameType<PrimT<T>, U>::value, bool>::type = true>
      __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const U& scalarValue, uint64_t mask[], const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      
    • Contiguous mask mode
      template <typename T, typename U, bool isSetMask = true, typename std::enable_if<IsSameType<PrimT<T>, U>::value, bool>::type = true>
      __aicore__ inline void LeakyRelu(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const U& scalarValue, uint64_t mask, const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      

Parameters

Table 1 Parameters in the template

  • T: Operand data type.
  • U: Data type of scalarValue.
  • isSetMask: Whether the mask mode and mask value are set inside the API.
    • true (default): the settings are performed inside the API. The APIs for high-dimensional tensor sharding computation and for computing the first n pieces of data of a tensor use the normal mode and the counter mode of the mask, respectively. Generally, retain the default value of isSetMask so that the mask mode and mask value are set inside the API based on the mask or calCount parameter passed by the developer.
    • false: the settings are performed outside the API.
Table 2 Parameters

  • dstLocal (Output)
    Destination operand.
    The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.
    The start address of the LocalTensor must be 32-byte aligned.
  • srcLocal (Input)
    Source operand.
    The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.
    The start address of the LocalTensor must be 32-byte aligned.
    Its data type must be the same as that of the destination operand.
  • scalarValue (Input)
    Scalar source operand (the slope applied to negative elements). Its data type must be the same as the element type of the tensor in the destination operand.
  • calCount (Input)
    Number of elements of the input data to compute.
    The value range depends on the operand data type, because the maximum number of elements that can be processed varies with the data type. The Vector Unit reads 256 bytes of contiguous data for computation each time and processes the input in multiple repeats. For 16-bit operands, calCount ∈ [1, 128 × 255], where 255 is the maximum number of iterations and 128 is the number of 16-bit elements processed per iteration. For 32-bit operands, calCount ∈ [1, 64 × 255], where 64 is the number of 32-bit elements processed per iteration.
  • mask (Input)
    Controls which elements participate in the computation in each iteration.
    • Contiguous mode: the number of contiguous elements that participate in the computation. The value range depends on the operand data type. For 16-bit operands, mask ∈ [1, 128]; for 32-bit operands, mask ∈ [1, 64]; for 64-bit operands, mask ∈ [1, 32].
    • Bitwise mode: selects the participating elements bit by bit. If a bit is 1, the corresponding element participates in the computation; if it is 0, the element is masked out. The parameter type is a uint64_t array of length 2.
      For example, if mask = [0, 8] and 8 = 0b1000, only the fourth element participates in the computation.
      The value range depends on the operand data type. For 16-bit operands, mask[0] and mask[1] ∈ [0, 2^64 − 1] and cannot both be 0. For 32-bit operands, mask[1] must be 0 and mask[0] ∈ (0, 2^64 − 1]. For 64-bit operands, mask[1] must be 0 and mask[0] ∈ (0, 2^32 − 1].
  • repeatTimes (Input)
    Number of iteration repeats. The Vector Unit reads 256 bytes of contiguous data for computation each time; reading the complete input requires multiple repeats, and repeatTimes specifies that number. For details about this parameter, see Common Parameters.
  • repeatParams (Input)
    Control structure information of element operations. For details, see UnaryRepeatParams.

Returns

None

Availability

Precautions

  • To save memory space when using high-dimensional tensor sharding computation APIs, you can define a tensor shared by the source and destination operands (by address overlapping). The general instruction restrictions are as follows.
    • For a single repeat (repeatTimes = 1), the source operand must completely overlap the destination operand.
    • For multiple repeats (repeatTimes > 1), if there is a dependency between the source operand and the destination operand, that is, the destination operand of the Nth iteration is the source operand of the (N+1)th iteration, address overlapping is not allowed.
  • For details about the alignment requirements of the operand address offset, see General Restrictions.

Examples

  • Example of high-dimensional tensor sharding computation (contiguous mask mode)
    #include "kernel_operator.h"
    class KernelBinaryScalar {
    public:
        __aicore__ inline KernelBinaryScalar() {}
        __aicore__ inline void Init(__gm__ uint8_t* src, __gm__ uint8_t* dstGm)
        {
            srcGlobal.SetGlobalBuffer((__gm__ half*)src);
            dstGlobal.SetGlobalBuffer((__gm__ half*)dstGm);
            pipe.InitBuffer(inQueueSrc, 1, 512 * sizeof(half));
            pipe.InitBuffer(outQueueDst, 1, 512 * sizeof(half));
        }
        __aicore__ inline void Process()
        {
            CopyIn();
            Compute();
            CopyOut();
        }
    private:
        __aicore__ inline void CopyIn()
        {
            AscendC::LocalTensor<half> srcLocal = inQueueSrc.AllocTensor<half>();
            AscendC::DataCopy(srcLocal, srcGlobal, 512);
            inQueueSrc.EnQue(srcLocal);
        }
        __aicore__ inline void Compute()
        {
            AscendC::LocalTensor<half> srcLocal = inQueueSrc.DeQue<half>();
            AscendC::LocalTensor<half> dstLocal = outQueueDst.AllocTensor<half>();
     
            uint64_t mask = 128;
            half scalar = 2;
        // repeatTimes = 4: 128 elements per repeat, 512 elements in total
        // dstBlkStride, srcBlkStride = 1: no gap between blocks within one repeat
        // dstRepStride, srcRepStride = 8: no gap between repeats
            AscendC::LeakyRelu(dstLocal, srcLocal, scalar, mask, 4, {1, 1, 8, 8});
            
            outQueueDst.EnQue<half>(dstLocal);
            inQueueSrc.FreeTensor(srcLocal);
        }
        __aicore__ inline void CopyOut()
        {
            AscendC::LocalTensor<half> dstLocal = outQueueDst.DeQue<half>();
            AscendC::DataCopy(dstGlobal, dstLocal, 512);
            outQueueDst.FreeTensor(dstLocal);
        }
    private:
        AscendC::TPipe pipe;
        AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueSrc;
        AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueueDst;
        AscendC::GlobalTensor<half> srcGlobal, dstGlobal;
    };
    extern "C" __global__ __aicore__ void binary_scalar_simple_kernel(__gm__ uint8_t* src, __gm__ uint8_t* dstGm)
    {
        KernelBinaryScalar op;
        op.Init(src, dstGm);
        op.Process();
    }
  • Example of high-dimensional tensor sharding computation - bitwise mask mode (This example shows only part of the code used in the computation process (Compute). To run the example code, copy the code snippet and replace the content in bold of the Compute function in the example template.)
    uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
    half scalar = 2;
    // repeatTimes = 4. 128 elements are processed in a single iteration; computing 512 elements requires four iterations.
    // dstBlkStride, srcBlkStride = 1. The interval between src data addresses involved in each iteration is one data block, indicating that data is read and written contiguously within an iteration.
    // dstRepStride, srcRepStride = 8. The interval between the start addresses of adjacent iterations is eight data blocks, indicating that data is read and written contiguously across iterations.
    AscendC::LeakyRelu(dstLocal, srcLocal, scalar, mask, 4, {1, 1, 8, 8});
  • Example of computing the first n pieces of data of a tensor (This example shows only part of the code used in the computation process (Compute). To run the example code, copy the code snippet and replace the content in bold of the Compute function in the example template.)
    half scalar = 2;
    AscendC::LeakyRelu(dstLocal, srcLocal, scalar, 512);
Result example:
Input (srcLocal): [1 2 3 ... 512]
Input (scalar): 2
Output (dstLocal): [1. 2. 3. ... 512.]