General Definition

Description

This is a generic format for an instruction with only one source operand.

Note that this section describes only the general format; it does not correspond to any out-of-the-box instruction.

Prototype

instruction (mask, dst, src, repeat_times, dst_rep_stride, src_rep_stride)

PIPE: VECTOR

Parameters

Table 1 Parameter description

instruction (Input)

A string specifying the instruction name. Only lowercase letters are supported in TIK DSL.

mask (Input)

128-bit mask. If a bit is set to 1, the corresponding element of the vector participates in the computation; if set to 0, it does not. In the current version, the Vector Unit can calculate a maximum of 256 bytes at a time.

The mask takes two forms: contiguous mask and bitwise mask.

  • To set a contiguous mask, pass a single Scalar or Python immediate indicating the number of leading contiguous valid elements. For example, mask = 16 means that the first 16 elements participate in the computation. The argument must be an immediate of type int or a Scalar of type int64/int32/int16.

    Value range: when dst and src are 16 bits, mask ∈ [1, 128]; when dst and src are 32 bits, mask ∈ [1, 64]; when dst and src are 64 bits, mask ∈ [1, 32].

  • To set a bitwise mask, pass a list of two Scalars (int64) or two immediates (int64), formatted as [mask_h, mask_l]. If a bit is set to 1, the corresponding element of the vector participates in the computation; if set to 0, it does not. mask_h covers the upper 64 elements and mask_l covers the lower 64 elements. For example, mask = [0, 8] (8 = 0b1000) means that only the fourth element participates in the computation.

    Value range: when dst and src are 16 bits, mask_h and mask_l ∈ [0, 2**64 – 1]; when dst and src are 32 bits, mask_h = 0 and mask_l ∈ [0, 2**64 – 1]; when dst and src are 64 bits, mask_h = 0 and mask_l ∈ [0, 2**32 – 1].

Note: mask applies to the source operand of each repeat.

dst (Output)

Destination operand, which points to the start element of the tensor. The supported data types vary depending on the specific instruction. The scope of the tensor is the Unified Buffer.

src (Input)

Source operand, which points to the start element of the tensor. The supported data types vary depending on the specific instruction. The scope of the tensor is the Unified Buffer.

repeat_times (Input)

Number of iterations (repeats).

dst_rep_stride (Input)

Repeat stride for the destination operand, between the corresponding blocks of successive iterations.

src_rep_stride (Input)

Repeat stride for the source operand, between the corresponding blocks of successive iterations.
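The relationship between the two mask forms can be made concrete with a short sketch. The helper below is illustrative only (make_bitwise_mask is not part of the TIK API); it builds the [mask_h, mask_l] pair that selects the first n elements, which is equivalent to passing n as a contiguous mask.

```python
def make_bitwise_mask(n):
    # Illustrative helper, not part of TIK. The 128-bit mask splits into
    # two 64-bit halves: mask_l covers elements 0-63, mask_h elements 64-127.
    full = (1 << n) - 1                  # n low bits set
    mask_l = full & ((1 << 64) - 1)      # lower 64 elements
    mask_h = full >> 64                  # upper 64 elements
    return [mask_h, mask_l]

# Contiguous mask = 16 is equivalent to the bitwise pair [0, 0xFFFF]:
print(make_bitwise_mask(16))    # [0, 65535]
# Contiguous mask = 128 (all fp16 lanes) is [2**64 - 1, 2**64 - 1]:
print(make_bitwise_mask(128))
```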

Restrictions

  • repeat_times ∈ [0, 255]. It must be a Scalar of type int16/int32/int64/uint16/uint32/uint64, an immediate of type int (other than 0), or an Expr of type int16/int32/int64/uint16/uint32/uint64.
  • The parallelism degree in each repeat depends on the data type and SoC version. The following uses PAR to represent the parallelism degree.
  • dst_rep_stride and src_rep_stride are in units of 32 bytes. Each must be a Scalar of type int16/int32/int64/uint16/uint32/uint64, an immediate of type int, or an Expr of type int16/int32/int64/uint16/uint32/uint64.
  • dst and src must be declared in scope_ubuf, and the supported data types are related to the chip version. If the data types are not supported, the tool reports an error.
  • dst has the same data type as src.
  • To save memory space, you can define a tensor shared by the source and destination operands (by address overlapping). The general instruction restrictions are as follows. Note that each instruction might have specific restrictions.
    • In the event of a single repeat (repeat_times = 1), the source operand must completely overlap the destination operand.
    • In the event of multiple repeats (repeat_times > 1), if there is a dependency between the source operand and the destination operand, that is, the destination operand of the Nth iteration is the source operand of the (N+1)th iteration, address overlapping is not allowed.
  • For details about the alignment requirements of the operand address offset, see General Restrictions.
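Because the repeat strides are expressed in 32-byte blocks, a contiguous layout needs a stride equal to the number of blocks consumed per repeat. The arithmetic below is an illustrative sketch (contiguous_rep_stride is a hypothetical helper, not a TIK function); the actual PAR value depends on the data type and SoC version.

```python
def contiguous_rep_stride(par, dtype_bytes, block_bytes=32):
    # Blocks consumed per repeat when PAR elements, each dtype_bytes wide,
    # are laid out back to back.
    return par * dtype_bytes // block_bytes

# fp16 with PAR = 128: 128 * 2 // 32 = 8 blocks, as in the examples below.
print(contiguous_rep_stride(128, 2))   # 8
# fp32 with PAR = 64: 64 * 4 // 32 = 8 blocks.
print(contiguous_rep_stride(64, 4))    # 8
```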

Example

  • Example of contiguous data operations
    from tbe import tik
    tik_instance = tik.Tik()
    dtype_size = {
        "int8": 1,
        "uint8": 1,
        "int16": 2,
        "uint16": 2,
        "float16": 2,
        "int32": 4,
        "uint32": 4,
        "float32": 4,
        "int64": 8,
    }
    shape = (2, 512)
    dtype = "float16"
    elements = 2 * 512
    # Number of operations per iteration, which is 128 in the current example. In bitwise mode, mask can be represented as [2**64-1, 2**64-1].
    mask = 128
    # Number of iterations, which is 8 in the current example. You can adjust the number of iterations as required.
    repeat_times = 8
    # Iteration stride between the previous repeat header and the next repeat header of the destination operand. The unit is 32 bytes. In the current example, the destination operand is placed contiguously. If data does not need to be processed contiguously, adjust the corresponding parameter.
    dst_rep_stride = 8
    # Iteration stride between the previous repeat header and the next repeat header of the source operand. The unit is 32 bytes. In the current example, the source operand is read contiguously. If data does not need to be processed contiguously, adjust the corresponding parameter.
    src_rep_stride = 8
    
    src_gm = tik_instance.Tensor(dtype, shape, name="src_gm", scope=tik.scope_gm)
    dst_gm = tik_instance.Tensor(dtype, shape, name="dst_gm", scope=tik.scope_gm)
    src_ub = tik_instance.Tensor(dtype, shape, name="src_ub", scope=tik.scope_ubuf)
    dst_ub = tik_instance.Tensor(dtype, shape, name="dst_ub", scope=tik.scope_ubuf)
    # Number of moved segments.
    nburst = 1
    # Length of the moved segment each time, in 32 bytes.
    burst = elements * dtype_size[dtype] // 32 // nburst
    # Stride between the previous burst tail and the next burst header, in 32 bytes.
    dst_stride, src_stride = 0, 0
    # Copy the user input to the source Unified Buffer.
    tik_instance.data_move(src_ub, src_gm, 0, nburst, burst, src_stride, dst_stride)
    tik_instance.vec_relu(mask, dst_ub, src_ub, repeat_times, dst_rep_stride, src_rep_stride)
    # Copy the compute result to the destination Global Memory.
    tik_instance.data_move(dst_gm, dst_ub, 0, nburst, burst, src_stride, dst_stride)
    tik_instance.BuildCCE(kernel_name="vec_relu", inputs=[src_gm], outputs=[dst_gm])
    
    Result example:
    Input (src_gm):
    [[ 0.000e+00  1.000e+00 -2.000e+00  3.000e+00 -4.000e+00  5.000e+00
      -6.000e+00  7.000e+00 -8.000e+00  9.000e+00 -1.000e+01  1.100e+01
      -1.200e+01  1.300e+01 -1.400e+01  1.500e+01 -1.600e+01  1.700e+01
      -1.800e+01  1.900e+01 -2.000e+01  2.100e+01 -2.200e+01  2.300e+01
      -2.400e+01  2.500e+01 -2.600e+01  2.700e+01 -2.800e+01  2.900e+01
      -3.000e+01  3.100e+01 -3.200e+01  3.300e+01 -3.400e+01  3.500e+01
      -3.600e+01  3.700e+01 -3.800e+01  3.900e+01 -4.000e+01  4.100e+01
      -4.200e+01  4.300e+01 -4.400e+01  4.500e+01 -4.600e+01  4.700e+01
      -4.800e+01  4.900e+01 -5.000e+01  5.100e+01 -5.200e+01  5.300e+01
      -5.400e+01  5.500e+01 -5.600e+01  5.700e+01 -5.800e+01  5.900e+01
      -6.000e+01  6.100e+01 -6.200e+01  6.300e+01 -6.400e+01  6.500e+01
      ...
      -1.010e+03  1.011e+03 -1.012e+03  1.013e+03 -1.014e+03  1.015e+03
      -1.016e+03  1.017e+03 -1.018e+03  1.019e+03 -1.020e+03  1.021e+03
      -1.022e+03  1.023e+03]]
    
    Output (dst_gm):
    [[0.000e+00 1.000e+00 0.000e+00 3.000e+00 0.000e+00 5.000e+00 0.000e+00
      7.000e+00 0.000e+00 9.000e+00 0.000e+00 1.100e+01 0.000e+00 1.300e+01
      0.000e+00 1.500e+01 0.000e+00 1.700e+01 0.000e+00 1.900e+01 0.000e+00
      2.100e+01 0.000e+00 2.300e+01 0.000e+00 2.500e+01 0.000e+00 2.700e+01
      0.000e+00 2.900e+01 0.000e+00 3.100e+01 0.000e+00 3.300e+01 0.000e+00
      3.500e+01 0.000e+00 3.700e+01 0.000e+00 3.900e+01 0.000e+00 4.100e+01
      0.000e+00 4.300e+01 0.000e+00 4.500e+01 0.000e+00 4.700e+01 0.000e+00
      4.900e+01 0.000e+00 5.100e+01 0.000e+00 5.300e+01 0.000e+00 5.500e+01
      0.000e+00 5.700e+01 0.000e+00 5.900e+01 0.000e+00 6.100e+01 0.000e+00
      6.300e+01 0.000e+00 6.500e+01 
      ...
      1.009e+03 0.000e+00 1.011e+03 0.000e+00 1.013e+03 0.000e+00 1.015e+03
      0.000e+00 1.017e+03 0.000e+00 1.019e+03 0.000e+00 1.021e+03 0.000e+00
      1.023e+03]]
  • Example of discontiguous data operations
    """
    There are 320 source operands. Obtain the absolute values of the source operands and write the results in groups of 32 elements, leaving a 32-element gap after each group.
    """
    from tbe import tik
    tik_instance = tik.Tik()
    dtype_size = {
        "int8": 1,
        "uint8": 1,
        "int16": 2,
        "uint16": 2,
        "float16": 2,
        "int32": 4,
        "uint32": 4,
        "float32": 4,
        "int64": 8,
    }
    
    shape = (10, 32)
    dst_shape = (10, 64)
    dtype = "float16"
    elements = 10 * 32
    dst_elements = 10 * 64
    # Number of operations per iteration, which is 32 in the current example. In bitwise mode, mask can be represented as [0, 2**32-1].
    mask = 32
    # Number of iterations, which is 10 in the current example. You can adjust the number of iterations as required.
    repeat_times = 10
    # Iteration stride between the previous repeat header and the next repeat header of the destination operand. The unit is 32 bytes. Each iteration produces 32 results followed by a 32-element gap, so the destination operand advances four blocks (64 fp16 elements) per iteration.
    dst_rep_stride = 4
    # Iteration stride between the previous repeat header and the next repeat header of the source operand. The unit is 32 bytes. Because there are 32 operations in each iteration, the source operand needs to be read at an iteration interval of two blocks.
    src_rep_stride = 2
    src_gm = tik_instance.Tensor(dtype, shape, name="src_gm", scope=tik.scope_gm)
    dst_gm = tik_instance.Tensor(dtype, dst_shape, name="dst_gm", scope=tik.scope_gm)
    src_ub = tik_instance.Tensor(dtype, shape, name="src_ub", scope=tik.scope_ubuf)
    dst_ub = tik_instance.Tensor(dtype, dst_shape, name="dst_ub", scope=tik.scope_ubuf)
    # Number of moved segments.
    nburst = 1
    # Length of the moved segment each time, in 32 bytes.
    burst = elements * dtype_size[dtype] // 32 // nburst
    # Stride between the previous burst tail and the next burst header, in 32 bytes.
    dst_stride, src_stride = 0, 0
    # Copy the user input to the source Unified Buffer.
    tik_instance.data_move(src_ub, src_gm, 0, nburst, burst, src_stride, dst_stride)
    # Initialize dst_ub to 0.
    tik_instance.vec_dup(64, dst_ub, 0, 10, 4)
    # Execute the vec_abs instruction.
    tik_instance.vec_abs(mask, dst_ub, src_ub, repeat_times, dst_rep_stride, src_rep_stride)
    # Copy the compute result to the destination Global Memory.
    dst_burst = dst_elements * dtype_size[dtype] // 32 // nburst
    tik_instance.data_move(dst_gm, dst_ub, 0, nburst, dst_burst, src_stride, dst_stride)
    tik_instance.BuildCCE(kernel_name="vec_abs", inputs=[src_gm], outputs=[dst_gm])
    
    Result example:
    Input (src_gm):
    [[   0.   -1.   -2.   -3.   -4.   -5.   -6.   -7.   -8.   -9.  -10.  -11.
       -12.  -13.  -14.  -15.  -16.  -17.  -18.  -19.  -20.  -21.  -22.  -23.
       -24.  -25.  -26.  -27.  -28.  -29.  -30.  -31.]
     [ -32.  -33.  -34.  -35.  -36.  -37.  -38.  -39.  -40.  -41.  -42.  -43.
       -44.  -45.  -46.  -47.  -48.  -49.  -50.  -51.  -52.  -53.  -54.  -55.
       -56.  -57.  -58.  -59.  -60.  -61.  -62.  -63.]
     [ -64.  -65.  -66.  -67.  -68.  -69.  -70.  -71.  -72.  -73.  -74.  -75.
       -76.  -77.  -78.  -79.  -80.  -81.  -82.  -83.  -84.  -85.  -86.  -87.
       -88.  -89.  -90.  -91.  -92.  -93.  -94.  -95.]
     [ -96.  -97.  -98.  -99. -100. -101. -102. -103. -104. -105. -106. -107.
      -108. -109. -110. -111. -112. -113. -114. -115. -116. -117. -118. -119.
      -120. -121. -122. -123. -124. -125. -126. -127.]
     [-128. -129. -130. -131. -132. -133. -134. -135. -136. -137. -138. -139.
      -140. -141. -142. -143. -144. -145. -146. -147. -148. -149. -150. -151.
      -152. -153. -154. -155. -156. -157. -158. -159.]
     [-160. -161. -162. -163. -164. -165. -166. -167. -168. -169. -170. -171.
      -172. -173. -174. -175. -176. -177. -178. -179. -180. -181. -182. -183.
      -184. -185. -186. -187. -188. -189. -190. -191.]
     [-192. -193. -194. -195. -196. -197. -198. -199. -200. -201. -202. -203.
      -204. -205. -206. -207. -208. -209. -210. -211. -212. -213. -214. -215.
      -216. -217. -218. -219. -220. -221. -222. -223.]
     [-224. -225. -226. -227. -228. -229. -230. -231. -232. -233. -234. -235.
      -236. -237. -238. -239. -240. -241. -242. -243. -244. -245. -246. -247.
      -248. -249. -250. -251. -252. -253. -254. -255.]
     [-256. -257. -258. -259. -260. -261. -262. -263. -264. -265. -266. -267.
      -268. -269. -270. -271. -272. -273. -274. -275. -276. -277. -278. -279.
      -280. -281. -282. -283. -284. -285. -286. -287.]
     [-288. -289. -290. -291. -292. -293. -294. -295. -296. -297. -298. -299.
      -300. -301. -302. -303. -304. -305. -306. -307. -308. -309. -310. -311.
      -312. -313. -314. -315. -316. -317. -318. -319.]]
    Output (dst_gm):
    [[  0.   1.   2.   3.   4.   5.   6.   7.   8.   9.  10.  11.  12.  13.
       14.  15.  16.  17.  18.  19.  20.  21.  22.  23.  24.  25.  26.  27.
       28.  29.  30.  31.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [ 32.  33.  34.  35.  36.  37.  38.  39.  40.  41.  42.  43.  44.  45.
       46.  47.  48.  49.  50.  51.  52.  53.  54.  55.  56.  57.  58.  59.
       60.  61.  62.  63.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [ 64.  65.  66.  67.  68.  69.  70.  71.  72.  73.  74.  75.  76.  77.
       78.  79.  80.  81.  82.  83.  84.  85.  86.  87.  88.  89.  90.  91.
       92.  93.  94.  95.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [ 96.  97.  98.  99. 100. 101. 102. 103. 104. 105. 106. 107. 108. 109.
      110. 111. 112. 113. 114. 115. 116. 117. 118. 119. 120. 121. 122. 123.
      124. 125. 126. 127.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [128. 129. 130. 131. 132. 133. 134. 135. 136. 137. 138. 139. 140. 141.
      142. 143. 144. 145. 146. 147. 148. 149. 150. 151. 152. 153. 154. 155.
      156. 157. 158. 159.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [160. 161. 162. 163. 164. 165. 166. 167. 168. 169. 170. 171. 172. 173.
      174. 175. 176. 177. 178. 179. 180. 181. 182. 183. 184. 185. 186. 187.
      188. 189. 190. 191.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [192. 193. 194. 195. 196. 197. 198. 199. 200. 201. 202. 203. 204. 205.
      206. 207. 208. 209. 210. 211. 212. 213. 214. 215. 216. 217. 218. 219.
      220. 221. 222. 223.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [224. 225. 226. 227. 228. 229. 230. 231. 232. 233. 234. 235. 236. 237.
      238. 239. 240. 241. 242. 243. 244. 245. 246. 247. 248. 249. 250. 251.
      252. 253. 254. 255.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [256. 257. 258. 259. 260. 261. 262. 263. 264. 265. 266. 267. 268. 269.
      270. 271. 272. 273. 274. 275. 276. 277. 278. 279. 280. 281. 282. 283.
      284. 285. 286. 287.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]
     [288. 289. 290. 291. 292. 293. 294. 295. 296. 297. 298. 299. 300. 301.
      302. 303. 304. 305. 306. 307. 308. 309. 310. 311. 312. 313. 314. 315.
      316. 317. 318. 319.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.
        0.   0.   0.   0.   0.   0.   0.   0.]]
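The repeat/stride behavior of these examples can be checked with a pure-Python reference model. The sketch below is illustrative only (simulate is not a TIK API): it assumes a contiguous integer mask and expresses strides in blocks of elems_per_block elements (16 for fp16). Driven with the parameters of the vec_abs example, it reproduces the interleaved output shown above.

```python
def simulate(op, src, mask, repeat_times, dst_rep_stride, src_rep_stride,
             elems_per_block, dst_len):
    # Reference model of the single-source pattern: each repeat applies op
    # to the first `mask` elements of its source block and writes them to
    # the corresponding destination block; strides are counted in blocks.
    dst = [0.0] * dst_len
    for r in range(repeat_times):
        s0 = r * src_rep_stride * elems_per_block
        d0 = r * dst_rep_stride * elems_per_block
        for i in range(mask):
            dst[d0 + i] = op(src[s0 + i])
    return dst

src = [-float(i) for i in range(320)]
dst = simulate(abs, src, mask=32, repeat_times=10,
               dst_rep_stride=4, src_rep_stride=2,
               elems_per_block=16, dst_len=640)
# Each group of 32 absolute values is followed by a 32-element gap,
# matching dst_gm above.
```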