Not
Function Usage
Performs a bitwise NOT on each element using the following formula, where PAR indicates the number of elements that the Vector Unit can process in one iteration:

dst_i = ~src_i, i ∈ [0, PAR)
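The element-wise behavior can be sketched in plain C++ as a scalar reference loop (illustrative only; it models the result, not the vectorized hardware implementation, and `NotReference` is a hypothetical name):

```cpp
#include <cstddef>
#include <cstdint>

// Reference semantics of bitwise Not: dst[i] = ~src[i] for each element.
// The Vector Unit processes PAR elements per iteration; this scalar loop
// only reproduces the mathematical result.
void NotReference(int16_t* dst, const int16_t* src, size_t calCount) {
    for (size_t i = 0; i < calCount; ++i) {
        dst[i] = static_cast<int16_t>(~src[i]);
    }
}
```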

Prototype
- Computation of the first n pieces of data of a tensor
template <typename T>
__aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, const int32_t& calCount)
- High-dimensional tensor sharding computation
- Bitwise mask mode
template <typename T, bool isSetMask = true>
__aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, uint64_t mask[], const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
- Contiguous mask mode
template <typename T, bool isSetMask = true>
__aicore__ inline void Not(const LocalTensor<T>& dstLocal, const LocalTensor<T>& srcLocal, uint64_t mask, const uint8_t repeatTimes, const UnaryRepeatParams& repeatParams)
Parameters
| Parameter | Description |
|---|---|
| T | Operand data type. |
| isSetMask | Indicates whether to set the mask inside the API. |
| Parameter | Input/Output | Description |
|---|---|---|
| dstLocal | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 32-byte aligned. |
| srcLocal | Input | Source operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 32-byte aligned. The source operand must have the same data type as the destination operand. |
| calCount | Input | Number of elements of the input data to compute. |
| mask | Input | Controls which elements participate in the computation in each iteration. |
| repeatTimes | Input | Number of iteration repeats. The Vector Unit reads 256 bytes of contiguous data per iteration, so processing the full input may require multiple repeats; repeatTimes specifies how many iterations to run. |
| repeatParams | Input | Parameters of type UnaryRepeatParams that control the operand address strides: the address stride of the operand for the same data block between adjacent iterations (see repeatStride), and the address stride of the operand between different data blocks within a single iteration (see dataBlockStride). |
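How the stride parameters map to byte offsets can be sketched as follows, assuming the usual Ascend C convention of 32-byte data blocks with both strides measured in blocks (`OperandOffset` is a hypothetical helper, not part of the API):

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t kBlockBytes = 32;  // size of one data block

// Byte offset of element `elem` in data block `blk` of iteration `rep`,
// given repeatStride and dataBlockStride in units of 32-byte blocks.
size_t OperandOffset(uint16_t rep, uint16_t blk, uint16_t elem,
                     uint16_t repStride, uint16_t blkStride,
                     size_t elemBytes) {
    return rep * repStride * kBlockBytes    // start of this iteration
         + blk * blkStride * kBlockBytes    // start of this data block
         + elem * elemBytes;                // element within the block
}
```

With blkStride = 1 and repStride = 8, the eight 32-byte blocks of each iteration lie back to back and consecutive iterations follow contiguously, which matches the `{ 1, 1, 8, 8 }` parameters used in the examples below.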
Returns
None
Availability
Constraints
- To save memory space, you can define a tensor shared by the source and destination operands (by address overlapping). The restrictions are as follows. Note that each instruction might have specific restrictions.
- For a single repeat (repeatTimes = 1), the source operand must completely overlap the destination operand.
- For multiple repeats (repeatTimes > 1), if there is a dependency between the source operand and the destination operand, that is, the destination operand of the Nth iteration is the source operand of the (N+1)th iteration, address overlapping is not allowed.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
Examples
This example shows only part of the code used in the computation process (Compute). In this example, srcLocal and dstLocal are of the int16_t type and occupy 16 bits.
To run the sample code, copy the code snippet and replace some code of the Compute function in Template Sample.
- Example of high-dimensional tensor sharding computation (contiguous mask mode)
uint64_t mask = 256 / sizeof(int16_t);
// repeatTimes = 4: 128 elements per repeat, 512 elements in total
// dstBlkStride, srcBlkStride = 1: no gap between blocks within a repeat
// dstRepStride, srcRepStride = 8: no gap between repeats
AscendC::Not(dstLocal, srcLocal, mask, 4, { 1, 1, 8, 8 });
- Example of high-dimensional tensor sharding computation (bitwise mask mode)
uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
// repeatTimes = 4: 128 elements per repeat, 512 elements in total
// dstBlkStride, srcBlkStride = 1: no gap between blocks within a repeat
// dstRepStride, srcRepStride = 8: no gap between repeats
AscendC::Not(dstLocal, srcLocal, mask, 4, { 1, 1, 8, 8 });
- Example of computing the first n pieces of data of a tensor
AscendC::Not(dstLocal, srcLocal, 512);
Input (srcLocal): [9 -2 8 ... 9 0]
Output (dstLocal): [-10 1 -9 ... -10 -1]
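The sample output follows from two's-complement arithmetic, where bitwise NOT satisfies ~x == -x - 1 (so 9 maps to -10, -2 to 1, 8 to -9, and 0 to -1). A minimal check (`BitwiseNot16` is an illustrative helper):

```cpp
#include <cstdint>

// For two's-complement integers, ~x == -x - 1, which explains the
// int16_t values shown in the example output above.
int16_t BitwiseNot16(int16_t x) {
    return static_cast<int16_t>(~x);
}
```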