Pad
Function Usage
Pads a two-dimensional tensor (height * width) so that it is 32-byte aligned in the width direction. If the width of the tensor is already 32-byte aligned and all data is valid, do not call this API. This API applies to the following scenarios:
- Scenario 1
The width of the tensor is not 32-byte aligned. Taking the half data type as an example, a 16 x 15 tensor (30 bytes per row) is padded with one column on the right to obtain a 16 x 16 tensor (32 bytes per row).
- Scenario 2
The width of the tensor is already 32-byte aligned, but some of the data is redundant. Taking the half data type as an example, a 16 x 16 tensor whose last two columns hold redundant data remains 16 x 16 after padding, but the redundant data in the last two columns is overwritten with the configured pad value.
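The alignment rule behind both scenarios can be illustrated with a standalone sketch (ordinary C++, not AscendC kernel code; `AlignedWidth` is a hypothetical helper name, not part of the API):

```cpp
#include <cstdint>

// Round a row of `width` elements, each `elemSize` bytes, up to the next
// 32-byte boundary and return the resulting width in elements.
inline uint32_t AlignedWidth(uint32_t width, uint32_t elemSize) {
    const uint32_t rowBytes = width * elemSize;
    const uint32_t alignedBytes = ((rowBytes + 31) / 32) * 32;
    return alignedBytes / elemSize;
}
```

For the half (2-byte) type, a width of 15 rounds up to 16 (scenario 1), while a width of 16 is already aligned (scenario 2).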
Prototype
Because the internal implementation of this API involves complex computation, additional temporary space is required to store intermediate variables generated during computation. Obtain the temporary space size (BufferSize) as follows: call the GetPadMaxMinTmpSize API provided in Pad Tiling to obtain the maximum and minimum temporary space sizes required. The minimum size guarantees correct functionality; the maximum size improves performance.
The temporary space can be allocated through the API framework or passed by developers through the sharedTmpBuffer input parameter. Therefore, there are two types of function prototypes for the Pad API.
- Pass the temporary space through the sharedTmpBuffer input parameter.
```cpp
template <typename T>
__aicore__ inline void Pad(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor,
    PadParams &padParams, const LocalTensor<uint8_t> &sharedTmpBuffer, PadTiling &tiling)
```
This method lets developers allocate and manage the temporary space themselves and reuse the buffer after the API call, so that the buffer is not repeatedly allocated and deallocated, improving flexibility and memory utilization.
- Allocate the temporary space through the API framework.
```cpp
template <typename T>
__aicore__ inline void Pad(const LocalTensor<T> &dstTensor, const LocalTensor<T> &srcTensor,
    PadParams &padParams, PadTiling &tiling)
```
When using this method, developers do not need to allocate the temporary space themselves, but must reserve enough space for the framework to use.
Parameters
| Parameter | Function |
|---|---|
| T | Data type of an operand. |
| Parameter | Input/Output | Meaning |
|---|---|---|
| dstTensor | Output | Destination operand, with a type of LocalTensor and a two-dimensional shape. For details about the definition of the LocalTensor data structure, see LocalTensor. |
| srcTensor | Input | Source operand, with a type of LocalTensor and a two-dimensional shape. For details about the definition of the LocalTensor data structure, see LocalTensor. |
| padParams | Input | Pad parameters of the PadParams type, specifying the number of pad columns and the pad value. Left and right padding are not supported at the same time. |
| sharedTmpBuffer | Input | Shared buffer, which is used to store temporary data generated during internal API computation. This enables developers to manage the sharedTmpBuffer space and reuse the buffer after calling the API, so that the buffer is not repeatedly allocated and deallocated, improving flexibility and buffer utilization. For details about how to obtain the size of the shared buffer, see Pad Tiling. The type is LocalTensor, and the supported TPosition is VECIN, VECCALC, or VECOUT. |
| tiling | Input | Tiling information required for computation. For details about how to obtain the tiling information, see Pad Tiling. |
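The PadParams definition itself is not reproduced on this page. Judging from the aggregate initializer `PadParams{0, 1, 321}` in the Examples section, a plausible layout is sketched below; the field names and types are assumptions inferred from usage, not the official definition:

```cpp
#include <cstdint>

// Hypothetical sketch of PadParams, inferred from PadParams{0, 1, 321}:
// pad-column counts on the left and right, plus the fill value.
struct PadParams {
    uint16_t leftPad = 0;   // number of columns padded on the left
    uint16_t rightPad = 0;  // number of columns padded on the right
    int32_t padValue = 0;   // value written into the padded columns
};
```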
Returns
None
Availability
Precautions
- In scenario 1, padding can be performed on both the left and right simultaneously.
- In scenario 2, padding can be performed only on the right.
- The total width after padding cannot exceed the width obtained after the original width is aligned to the nearest 32 bytes.
- For details about the alignment requirements of the operand address offset, see General Restrictions.
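The third precaution can be checked numerically. The helper below (plain C++; `PaddedWidthValid` is a hypothetical name, not part of the API) verifies that the width after padding stays within the 32-byte-aligned original width:

```cpp
#include <cstdint>

// Returns true if leftPad + width + rightPad stays within the width obtained
// by aligning the original row up to the next 32-byte boundary.
inline bool PaddedWidthValid(uint32_t width, uint32_t elemSize,
                             uint32_t leftPad, uint32_t rightPad) {
    const uint32_t alignedWidth = ((width * elemSize + 31) / 32) * 32 / elemSize;
    return leftPad + width + rightPad <= alignedWidth;
}
```

For half (2-byte) data with a width of 15, one pad column is allowed (15 + 1 = 16), but two would exceed the aligned width.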
Examples
```cpp
#include "kernel_operator.h"

template <typename T> class KernelPad {
public:
    __aicore__ inline KernelPad() {}
    __aicore__ inline void Init(GM_ADDR dstGm, GM_ADDR srcGm, uint32_t heightIn, uint32_t widthIn,
        uint32_t oriWidthIn, AscendC::PadParams &padParamsIn, const PadTiling &tilingData)
    {
        height = heightIn;
        width = widthIn;
        oriWidth = oriWidthIn;
        padParams = padParamsIn;
        srcGlobal.SetGlobalBuffer((__gm__ T *)srcGm);
        dstGlobal.SetGlobalBuffer((__gm__ T *)dstGm);
        pipe.InitBuffer(inQueueSrcVecIn, 1, height * width * sizeof(T));
        // Round the row width up to the next 32-byte boundary for the output buffer.
        alignedWidth = ((width * sizeof(T) - 1) / 32 + 1) * 32 / sizeof(T);
        pipe.InitBuffer(inQueueSrcVecOut, 1, height * alignedWidth * sizeof(T));
        tiling = tilingData;
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<T> srcLocal = inQueueSrcVecIn.AllocTensor<T>();
        AscendC::DataCopy(srcLocal, srcGlobal, height * width);
        inQueueSrcVecIn.EnQue(srcLocal);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<T> srcLocal = inQueueSrcVecIn.DeQue<T>();
        AscendC::LocalTensor<T> dstLocal = inQueueSrcVecOut.AllocTensor<T>();
        AscendC::Pad(dstLocal, srcLocal, padParams, tiling);
        inQueueSrcVecOut.EnQue(dstLocal);
        inQueueSrcVecIn.FreeTensor(srcLocal);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<T> dstLocal = inQueueSrcVecOut.DeQue<T>();
        AscendC::DataCopy(dstGlobal, dstLocal, height * alignedWidth);
        inQueueSrcVecOut.FreeTensor(dstLocal);
    }

private:
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::QuePosition::VECIN, 1> inQueueSrcVecIn;
    AscendC::TQue<AscendC::QuePosition::VECOUT, 1> inQueueSrcVecOut;
    AscendC::GlobalTensor<T> srcGlobal;
    AscendC::GlobalTensor<T> dstGlobal;
    uint32_t height;
    uint32_t width;
    uint32_t oriWidth;
    uint32_t alignedWidth;
    AscendC::PadParams padParams;
    PadTiling tiling;
};

extern "C" __global__ __aicore__ void kernel_pad_half_16_15_15(GM_ADDR src_gm, GM_ADDR dst_gm,
    __gm__ uint8_t *tiling)
{
    GET_TILING_DATA(tilingData, tiling);
    KernelPad<half> op;
    // Pad one column on the right with the value 321: 16 x 15 -> 16 x 16.
    AscendC::PadParams padParams{0, 1, 321};
    op.Init(dst_gm, src_gm, 16, 15, 15, padParams, tilingData.padTilingData);
    op.Process();
}
```
Input data (16 x 15):
```
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
45 46 47 48 49 50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104
105 106 107 108 109 110 111 112 113 114 115 116 117 118 119
120 121 122 123 124 125 126 127 128 129 130 131 132 133 134
135 136 137 138 139 140 141 142 143 144 145 146 147 148 149
150 151 152 153 154 155 156 157 158 159 160 161 162 163 164
165 166 167 168 169 170 171 172 173 174 175 176 177 178 179
180 181 182 183 184 185 186 187 188 189 190 191 192 193 194
195 196 197 198 199 200 201 202 203 204 205 206 207 208 209
210 211 212 213 214 215 216 217 218 219 220 221 222 223 224
225 226 227 228 229 230 231 232 233 234 235 236 237 238 239
```
Output data (16 x 16):
```
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 321
15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 321
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 321
45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 321
60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 321
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 321
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 321
105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 321
120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 321
135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 321
150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 321
165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 321
180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 321
195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 321
210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 321
225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 321
```