Div
Function Usage
Performs element-wise division according to the following formula, where PAR indicates the number of elements that the Vector Unit can process in one iteration:

dst_i = src0_i / src1_i, for i in [0, PAR)

Prototype
- Computation of the entire tensor
```cpp
dstLocal = src0Local / src1Local;
```
- Computation of the first n pieces of data of a tensor
```cpp
template <typename T>
__aicore__ inline void Div(const LocalTensor<T>& dstLocal, const LocalTensor<T>& src0Local, const LocalTensor<T>& src1Local, const int32_t& calCount)
```
- High-dimensional tensor sharding computation
  - Bitwise mask mode
```cpp
template <typename T, bool isSetMask = true>
__aicore__ inline void Div(const LocalTensor<T>& dstLocal, const LocalTensor<T>& src0Local, const LocalTensor<T>& src1Local, uint64_t mask[], const uint8_t repeatTimes, const BinaryRepeatParams& repeatParams)
```
  - Contiguous mask mode
```cpp
template <typename T, bool isSetMask = true>
__aicore__ inline void Div(const LocalTensor<T>& dstLocal, const LocalTensor<T>& src0Local, const LocalTensor<T>& src1Local, uint64_t mask, const uint8_t repeatTimes, const BinaryRepeatParams& repeatParams)
```
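The semantics of the calCount overload can be sketched with a minimal host-side model in plain C++. This is an illustration only, not the device implementation: the real API runs on the Vector Unit and requires 32-byte-aligned LocalTensor operands.

```cpp
#include <cstdint>
#include <vector>

// Host-side model (illustrative only) of Div's calCount overload:
// divides the first calCount elements of src0 by src1, element-wise.
// Elements beyond calCount are left untouched.
template <typename T>
void DivModel(std::vector<T>& dst, const std::vector<T>& src0,
              const std::vector<T>& src1, int32_t calCount) {
    for (int32_t i = 0; i < calCount; ++i) {
        dst[i] = src0[i] / src1[i];
    }
}
```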
Parameters

| Parameter | Description |
|---|---|
| T | Operand data type. |
| isSetMask | Indicates whether the mask is set inside the API. |

| Parameter | Input/Output | Description |
|---|---|---|
| dstLocal | Output | Destination operand. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 32-byte aligned. |
| src0Local, src1Local | Input | Source operands. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. The start address of the LocalTensor must be 32-byte aligned. The two source operands must have the same data type as the destination operand. |
| calCount | Input | Number of elements of the input data to compute. |
| mask | Input | Controls the elements that participate in computation in each iteration. In contiguous mask mode, mask specifies the number of leading elements computed per iteration; in bitwise mask mode, each bit of the mask[2] array enables the corresponding element. |
| repeatTimes | Input | Number of iteration repeats. The Vector Unit reads 256 bytes of contiguous data for computation in each iteration, so processing the complete input requires multiple repeats; repeatTimes specifies how many. |
| repeatParams | Input | Parameters that control the operand address strides, of type BinaryRepeatParams. They specify, for each operand, the address stride between adjacent iterations (see repeatStride) and the address stride between different data blocks within a single iteration (see dataBlockStride). |
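How mask, repeatTimes, and the stride fields interact can be modeled on the host for a 2-byte element type such as half, where one 32-byte data block holds 16 elements and one 256-byte iteration covers 8 blocks. The sketch below is a simplified illustration under those assumptions, not the device implementation:

```cpp
#include <cstdint>
#include <vector>

// Host-side model (illustrative only) of contiguous-mask addressing for a
// 2-byte element type. Block and repeat strides are in 32-byte data blocks,
// i.e. 16 elements each; one iteration spans 8 blocks (256 bytes).
constexpr int kElemsPerBlock = 16;

void DivStrideModel(std::vector<float>& dst, const std::vector<float>& s0,
                    const std::vector<float>& s1, uint64_t mask,
                    uint8_t repeatTimes, int dstBlk, int s0Blk, int s1Blk,
                    int dstRep, int s0Rep, int s1Rep) {
    for (int r = 0; r < repeatTimes; ++r) {
        // In contiguous mask mode, the first `mask` elements of each
        // iteration participate in the computation.
        for (int e = 0; e < static_cast<int>(mask); ++e) {
            int blk = e / kElemsPerBlock;
            int off = e % kElemsPerBlock;
            int d = (r * dstRep + blk * dstBlk) * kElemsPerBlock + off;
            int a = (r * s0Rep + blk * s0Blk) * kElemsPerBlock + off;
            int b = (r * s1Rep + blk * s1Blk) * kElemsPerBlock + off;
            dst[d] = s0[a] / s1[b];
        }
    }
}
```

With mask = 128, repeatTimes = 4, block strides of 1, and repeat strides of 8, the model reads and writes 512 contiguous elements, matching the examples below.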
Returns
None
Availability
Precautions
- To save memory space when using high-dimensional tensor sharding computation APIs, you can define a tensor shared by the source and destination operands (by address overlapping). The general instruction restrictions are as follows.
- In a single iteration, the source operand must completely overlap the destination operand. Partial overlapping is not supported.
- Across multiple iterations, if the destination operand of iteration N is the source operand of iteration N+1, address overlapping is not supported, because iteration N+1 depends on the result of iteration N.
- When the entire tensor computation API is used for symbol overloading, the computation workload is the total length of the destination LocalTensor.
- Guard against division-by-zero errors: ensure the divisor tensor contains no zero elements.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
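The division-by-zero precaution matters because, under IEEE 754 arithmetic on the host, a zero divisor does not raise an error: it silently produces +/-inf (nonzero numerator) or NaN (0/0), which then propagates through later computations. A small host-side C++ check (device behavior for half may differ):

```cpp
#include <cmath>

// Returns true only when dividing num by den yields a finite result.
// 1.0f / 0.0f is +inf and 0.0f / 0.0f is NaN under IEEE 754; neither traps.
inline bool IsFiniteQuotient(float num, float den) {
    return std::isfinite(num / den);
}
```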
Examples
- Example of high-dimensional tensor sharding computation (contiguous mask mode)
```cpp
#include "kernel_operator.h"

class KernelDiv {
public:
    __aicore__ inline KernelDiv() {}
    __aicore__ inline void Init(__gm__ uint8_t* src0Gm, __gm__ uint8_t* src1Gm, __gm__ uint8_t* dstGm)
    {
        src0Global.SetGlobalBuffer((__gm__ half*)src0Gm);
        src1Global.SetGlobalBuffer((__gm__ half*)src1Gm);
        dstGlobal.SetGlobalBuffer((__gm__ half*)dstGm);
        pipe.InitBuffer(inQueueSrc0, 1, 512 * sizeof(half));
        pipe.InitBuffer(inQueueSrc1, 1, 512 * sizeof(half));
        pipe.InitBuffer(outQueueDst, 1, 512 * sizeof(half));
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<half> src0Local = inQueueSrc0.AllocTensor<half>();
        AscendC::LocalTensor<half> src1Local = inQueueSrc1.AllocTensor<half>();
        AscendC::DataCopy(src0Local, src0Global, 512);
        AscendC::DataCopy(src1Local, src1Global, 512);
        inQueueSrc0.EnQue(src0Local);
        inQueueSrc1.EnQue(src1Local);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<half> src0Local = inQueueSrc0.DeQue<half>();
        AscendC::LocalTensor<half> src1Local = inQueueSrc1.DeQue<half>();
        AscendC::LocalTensor<half> dstLocal = outQueueDst.AllocTensor<half>();
        uint64_t mask = 128;
        // repeatTimes = 4: 128 elements per iteration, 512 elements in total
        AscendC::Div(dstLocal, src0Local, src1Local, mask, 4, { 1, 1, 1, 8, 8, 8 });
        outQueueDst.EnQue<half>(dstLocal);
        inQueueSrc0.FreeTensor(src0Local);
        inQueueSrc1.FreeTensor(src1Local);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<half> dstLocal = outQueueDst.DeQue<half>();
        AscendC::DataCopy(dstGlobal, dstLocal, 512);
        outQueueDst.FreeTensor(dstLocal);
    }

private:
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueSrc0, inQueueSrc1;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> outQueueDst;
    AscendC::GlobalTensor<half> src0Global, src1Global, dstGlobal;
};

extern "C" __global__ __aicore__ void div_simple_kernel(__gm__ uint8_t* src0Gm, __gm__ uint8_t* src1Gm, __gm__ uint8_t* dstGm)
{
    KernelDiv op;
    op.Init(src0Gm, src1Gm, dstGm);
    op.Process();
}
```
- Example of high-dimensional tensor sharding computation (bitwise mask mode). This example shows only part of the code used in the computation process (Compute). To run the example code, copy the code snippet and replace the content in bold of the Compute function in the example template.
```cpp
uint64_t mask[2] = { UINT64_MAX, UINT64_MAX };
// repeatTimes = 4: 128 elements are computed per iteration, 512 elements in total.
// dstBlkStride, src0BlkStride, src1BlkStride = 1: data is read and written contiguously within a single iteration.
// dstRepStride, src0RepStride, src1RepStride = 8: data is read and written contiguously between adjacent iterations.
AscendC::Div(dstLocal, src0Local, src1Local, mask, 4, { 1, 1, 1, 8, 8, 8 });
```
- Example of computing the first n pieces of data of a tensor. This example shows only part of the code used in the computation process (Compute). To run the example code, copy the code snippet and replace the content in bold of the Compute function in the example template.
```cpp
AscendC::Div(dstLocal, src0Local, src1Local, 512);
```
- Example of entire tensor computation (This example shows only part of the code used in the computation process (Compute). To run the example code, copy the code snippet and replace the content in bold of the Compute function in the example template.)
```cpp
dstLocal = src0Local / src1Local;
```
Input (src0Local): [1.0 2.0 3.0 ... 512.0]
Input (src1Local): [2.0 2.0 2.0 ... 2.0]
Output (dstLocal): [0.5 1.0 1.5 ... 256.0]
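In the bitwise mask mode used above, the two uint64_t words select individual elements of each 128-element iteration: bit e of mask[0] enables element e (0 to 63) and bit e of mask[1] enables element e + 64. A small host-side helper, shown here purely for illustration, makes the mapping explicit:

```cpp
#include <cstdint>

// Returns true if element e (0..127) of an iteration participates in the
// computation, given a bitwise mask as passed to Div's mask[] overload.
// mask[0] covers elements 0-63; mask[1] covers elements 64-127.
inline bool MaskSelects(const uint64_t mask[2], int e) {
    return (mask[e / 64] >> (e % 64)) & 1ULL;
}
```

For example, { UINT64_MAX, UINT64_MAX } selects all 128 elements, which is why the bitwise example computes the same result as mask = 128 in contiguous mode.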