vlrelu

Description

Computes Leaky ReLU on the input tensor element-wise: an element greater than or equal to 0 is retained unchanged, while a negative element is multiplied by the constant alpha.
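The element-wise rule can be sketched with NumPy (illustrative only; the actual vlrelu computation is compiled and executed on the device, and this helper name is hypothetical):

```python
import numpy as np

def leaky_relu(x, alpha=0.0):
    # Elements >= 0 pass through unchanged; negative elements are scaled by alpha.
    # With the default alpha=0, this reduces to standard ReLU.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
y = leaky_relu(x, alpha=0.01)  # -> [-0.02, -0.005, 0.0, 3.0]
```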

Prototype

vlrelu(raw_tensor, alpha=0)

Parameters

  • raw_tensor: a tvm.tensor for the input tensor
  • alpha: a scalar multiplier applied to the negative elements of the input; defaults to 0, which is equivalent to standard ReLU
  • The following data types are supported:

    Atlas 200/300/500 Inference Product: supports float16, float32, and int32.

    Atlas Training Series Product: supports float16, float32, and int32.

Applicability

Atlas 200/300/500 Inference Product

Atlas Training Series Product

Returns

res_tensor: a tvm.tensor for the result tensor

Restrictions

None

Example

from tbe import tvm
from tbe import dsl
shape = (1024, 1024)
input_dtype = "float16"
# Define the input tensor placeholder
data = tvm.placeholder(shape, name="data", dtype=input_dtype)
# Leaky ReLU: negative elements are multiplied by alpha=0.01
res = dsl.vlrelu(data, alpha=0.01)