Tensor
Description
Defines a Tensor variable.
Prototype
Tensor(dtype, shape, scope, name, enable_buffer_reuse=False, is_workspace=False, is_atomic_add=False,
max_mem_size=None, init_value=None, start_addr=None)
Parameters
| Parameter | Input/Output | Description |
|---|---|---|
| dtype | Input | Data type of the Tensor object: uint8, int8, uint16, int16, float16, uint32, int32, float32, uint64, or int64. |
| shape | Input | Shape of the Tensor object. A list or tuple of immediates (of type int or float), Scalars (of type int or uint), or Exprs (of type int or uint). Immediates of the float type are not recommended. Note the following restrictions on the shape: |
| scope | Input | Scope of the Tensor object, that is, the buffer space where the Tensor object is located. scope_gm indicates external storage; the rest indicate internal storage. Computation can be performed only after data in external storage has been transferred to internal storage. |
| name | Input | A string specifying the name of the Tensor object. Only digits, letters, and underscores (_) are allowed, and the name must not start with a digit. The tensor name must be unique. NOTE: When scope is set to scope_gm, the name parameter must not be __fake_tensor. |
| enable_buffer_reuse | Input | Reserved; not recommended. |
| is_workspace | Input | A bool. Defaults to False. If set to True, the current Tensor is used only for storing temporary data. In this case, scope must be scope_gm, and the Tensor must not be included in the input or output tensors; that is, the names of the input and output tensors must not contain the name of the workspace tensor. |
| is_atomic_add | Input | Whether to initialize the Global Memory space. Defaults to False. If set to True, scope must be scope_gm. NOTE: This parameter takes effect only when the operator is executed in a network. During network execution, Graph Engine determines whether to initialize the Global Memory space based on this parameter. |
| max_mem_size | Input | Memory size of the Tensor. |
| init_value | Input | A single element or a list/tuple giving the Tensor's initial value. NOTE: This parameter is reserved and is not supported in the current version. |
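Before declaring a tensor, it can be useful to estimate the buffer footprint implied by dtype and shape. The sketch below is plain Python, independent of TIK; the per-dtype byte sizes are inferred from the dtype names listed above:

```python
# Bytes per element for each dtype accepted by Tensor()
DTYPE_BYTES = {
    "uint8": 1, "int8": 1,
    "uint16": 2, "int16": 2, "float16": 2,
    "uint32": 4, "int32": 4, "float32": 4,
    "uint64": 8, "int64": 8,
}

def tensor_bytes(dtype, shape):
    """Return the raw buffer size, in bytes, of a tensor with this dtype/shape."""
    size = DTYPE_BYTES[dtype]
    for dim in shape:
        size *= dim
    return size

# A float16 tensor of shape (128,) occupies 256 bytes.
print(tensor_bytes("float16", (128,)))  # 256
```

Comparing this estimate against the capacity of the target buffer (for example, the 1 MB L1 Buffer in the restrictions below) predicts whether the allocation will succeed.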
Applicability
Restrictions
- When the total size of the tensors exceeds the capacity of the corresponding buffer type, a build error is reported.
In the following example, the size of data_a is 1025 x 1024 bytes, which exceeds the 1 MB capacity of the L1 Buffer.

```python
from tbe import tik

def buffer_allocate_test6():
    tik_instance = tik.Tik()
    data_a = tik_instance.Tensor("int8", (1025 * 1024,), name="data_a",
                                 scope=tik.scope_cbuf)
    tik_instance.BuildCCE(kernel_name="buffer_allocate_test", inputs=[], outputs=[])
    return tik_instance

if __name__ == "__main__":
    tik_instance = buffer_allocate_test6()
```

Build error:
RuntimeError: Applied buffer size(1049600B) more than available buffer size(1048576B).
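The failing allocation above can be verified with plain arithmetic: an int8 tensor of shape (1025 * 1024,) needs 1049600 bytes, which is 1024 bytes more than the 1 MB (1048576-byte) L1 Buffer reported in the error message:

```python
L1_BUFFER_BYTES = 1024 * 1024      # 1 MB L1 Buffer capacity
requested = 1025 * 1024 * 1        # int8 occupies 1 byte per element

print(requested)                   # 1049600
print(requested - L1_BUFFER_BYTES) # 1024 bytes over capacity
```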
- If a Tensor is accessed beyond its defined scope, a build error is reported. In the following example, data_a_l1 is defined only within new_stmt_scope. When the data_move API is called to access data_a_l1 outside that scope, an error is reported.

```python
from tbe import tik

def tensor_outrange_examine_test6():
    tik_instance = tik.Tik()
    data_a = tik_instance.Tensor("float16", (128,), name="data_a", scope=tik.scope_gm)
    data_b = tik_instance.Tensor("float16", (128,), name="data_b", scope=tik.scope_gm)
    with tik_instance.new_stmt_scope():
        data_a_ub = tik_instance.Tensor("float16", (128,), name="data_a_ub",
                                        scope=tik.scope_ubuf)
        data_a_l1 = tik_instance.Tensor("float16", (128,), name="data_a_l1",
                                        scope=tik.scope_cbuf)
        tik_instance.data_move(data_a_l1, data_a, 0, 1, 128 // 16, 0, 0)
    # data_a_l1 is accessed here, beyond the scope in which it was defined.
    tik_instance.data_move(data_a_ub, data_a_l1, 0, 1, 128 // 16, 0, 0)
    tik_instance.data_move(data_b, data_a_ub, 0, 1, 128 // 16, 0, 0)
    tik_instance.BuildCCE(kernel_name="tensor_outrange_examine",
                          inputs=[data_a], outputs=[data_b])
    return tik_instance
```

Build error:
RuntimeError: This tensor is not defined in this scope.
- If a tensor is beyond its defined scope, its buffer can be reused. In the following example, because data_a_ub1 and data_a_ub2 are beyond their defined scope, their combined buffer of 126,976 bytes (62 x 1024 x 2 bytes) can be reused by data_b_ub.

```python
from tbe import tik

def double_buffer_test6():
    tik_instance = tik.Tik()
    data_a = tik_instance.Tensor("int8", (124 * 1024,), name="data_a",
                                 scope=tik.scope_ubuf)
    with tik_instance.for_range(0, 2):
        data_a_ub1 = tik_instance.Tensor("int8", (62 * 1024,), name="data_a_ub1",
                                         scope=tik.scope_ubuf)
        data_a_ub2 = tik_instance.Tensor("int8", (62 * 1024,), name="data_a_ub2",
                                         scope=tik.scope_ubuf)
    data_b_ub = tik_instance.Tensor("int8", (125 * 1024,), name="data_b_ub",
                                    scope=tik.scope_ubuf)
    tik_instance.BuildCCE(kernel_name="tbe_double_buffer_no_loop",
                          inputs=[], outputs=[])
    return tik_instance

if __name__ == "__main__":
    tik_instance = double_buffer_test6()
```

If data_b_ub exceeds the available Unified Buffer space, the following error is reported during the build:
RuntimeError: Tensor data_b_ub applies buffer size(128000B) more than available buffer size(126976B).
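The sizes in the reuse example above work out as follows: the two out-of-scope tensors free 126,976 bytes, but data_b_ub requests 128,000 bytes, so reuse alone cannot satisfy the allocation:

```python
reusable = 62 * 1024 * 2     # data_a_ub1 + data_a_ub2, both out of scope
requested = 125 * 1024       # data_b_ub

print(reusable)              # 126976
print(requested)             # 128000
print(requested > reusable)  # True -> build error
```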
- For user-defined tensors, the start address of the allocated buffer is aligned according to General Restrictions.
If address alignment causes the total size of a buffer type to be exceeded, a build error is reported.
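As a sketch of how start-address alignment can push an allocation over the limit, assume a 32-byte alignment granularity (the actual granularity is defined in General Restrictions and may differ per buffer type):

```python
ALIGN = 32  # assumed alignment granularity; see General Restrictions

def align_up(addr):
    """Round addr up to the next ALIGN-byte boundary."""
    return (addr + ALIGN - 1) // ALIGN * ALIGN

# Two tensors of 100 bytes each: 200 bytes of raw data, but aligning the
# second tensor's start address makes the pair occupy 128 + 100 = 228 bytes.
start_of_second = align_up(100)
print(start_of_second)        # 128
print(start_of_second + 100)  # 228
```

The padding introduced between tensors is why a set of allocations whose raw sizes sum to less than the buffer capacity can still fail to fit.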
Returns
A Tensor instance.
Example
```python
from tbe import tik

tik_instance = tik.Tik()
data_A = tik_instance.Tensor("float16", (128,), name="data_A", scope=tik.scope_gm)
```