Not

Function Usage

Performs an elementwise bitwise NOT using the following formula, where PAR indicates the number of elements that the Vector Unit can process in one iteration:

dstLocal[i] = ~srcLocal[i], i ∈ [0, PAR)

Prototype

  • Computation of the first n pieces of data of a tensor
    template <typename T>
    __aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const int32_t& calCount)
    
  • High-dimensional tensor sharding computation
    • Bitwise mask mode
      template <typename T, bool isSetMask = true>
      __aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, uint64_t mask[], const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      
    • Contiguous mask mode
      template <typename T, bool isSetMask = true>
      __aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, uint64_t mask, const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
      

Parameters

Table 1 Parameters in the template

Parameter

Description

T

Operand data type.

isSetMask

Indicates whether to set mask inside the API.

  • true: sets mask inside the API.
  • false: sets mask outside the API. Developers need to use the SetVectorMask API to set the mask value. In this mode, the mask value in the input parameter of this API must be set to MASK_PLACEHOLDER.
Table 2 Parameters

Parameter

Input/Output

Description

dstLocal

Output

Destination operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

For the Atlas Training Series Product, the supported data types are uint16_t and int16_t.

srcLocal

Input

Source operand.

The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT.

The start address of the LocalTensor must be 32-byte aligned.

The source operand must have the same data type as the destination operand.

For the Atlas Training Series Product, the supported data types are uint16_t and int16_t.

calCount

Input

Number of elements of the input data.

mask

Input

mask is used to control the elements that participate in computation in each iteration.

  • Contiguous mode: indicates the number of contiguous elements that participate in computation. The value range depends on the operand data type. For 16-bit operands, mask ∈ [1, 128]; for 32-bit operands, mask ∈ [1, 64]; for 64-bit operands, mask ∈ [1, 32].
  • Bitwise mode: controls the elements that participate in computation by bit. If a bit is set to 1, the corresponding element participates in the computation; if a bit is set to 0, the corresponding element is masked out. The parameter type is a uint64_t array of length 2.

    For example, if mask = [8, 0] and 8 = 0b1000, only the fourth element participates in computation.

    The value range depends on the operand data type. For 16-bit operands, mask[0] and mask[1] ∈ [0, 2^64 - 1] and cannot both be 0. For 32-bit operands, mask[1] must be 0 and mask[0] ∈ (0, 2^64 - 1]. For 64-bit operands, mask[1] must be 0 and mask[0] ∈ (0, 2^32 - 1].

repeatTimes

Input

Number of iteration repeats. The Vector Unit reads 256 bytes of contiguous data per iteration; processing the complete input therefore may require multiple repeats, and repeatTimes specifies how many.

For details about this parameter, see Common Parameters.

repeatParams

Input

Parameters that control the operand address strides, of the UnaryRepeatParams type. They specify the address stride of each operand for the same data block between adjacent iterations, and the address stride of each operand between different data blocks within a single iteration.

For details about the address stride of the operand between adjacent iterations, see repeatStride. For details about the address stride of the operand between different data blocks in a single iteration, see dataBlockStride.

Returns

None

Availability

Atlas Training Series Product

Constraints

  • To save memory space, you can define a tensor shared by the source and destination operands (by address overlapping). The restrictions are as follows. Note that each instruction might have specific restrictions.
    • For a single repeat (repeatTimes = 1), the source operand must completely overlap the destination operand.
    • For multiple repeats (repeatTimes > 1), if there is a dependency between the source operand and the destination operand, that is, the destination operand of the Nth iteration is the source operand of the (N+1)th iteration, address overlapping is not allowed.
  • For details about the alignment requirements of the operand address offset, see General Restrictions.

Examples

This example shows only part of the code used in the computation process (Compute). In this example, srcLocal and dstLocal are of the int16_t type, so each element occupies 16 bits.

To run the sample code, copy the code snippet and replace some code of the Compute function in Template Sample.

  • Example of high-dimensional tensor sharding computation (contiguous mask mode)
    uint64_t mask = 256 / sizeof(int16_t);
    // repeatTimes = 4, 128 elements one repeat, 512 elements total
    // dstBlkStride, srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride, srcRepStride = 8, no gap between repeats
    AscendC::Not(dstLocal, srcLocal, mask, 4, { 1, 1, 8, 8 });
    
  • Example of high-dimensional tensor sharding computation (bitwise mask mode)
    uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
    // repeatTimes = 4, 128 elements one repeat, 512 elements total
    // dstBlkStride, srcBlkStride = 1, no gap between blocks in one repeat
    // dstRepStride, srcRepStride = 8, no gap between repeats
    AscendC::Not(dstLocal, srcLocal, mask, 4, { 1, 1, 8, 8 });
    
  • Example of computing the first n pieces of data of a tensor
    AscendC::Not(dstLocal, srcLocal, 512);
    
Result example:
Input (srcLocal): [9 -2 8 ... 9 0]
Output (dstLocal): [-10 1 -9 ... -10 -1]