LayerNormParam

Attributes:

  • layer_type
    Type: torch_atb.LayerNormParam.LayerNormType
    Default: torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_UNDEFINED
    Description: The default type cannot be used; this parameter must be set explicitly.
  • norm_param
    Type: torch_atb.LayerNormParam.NormParam
    Default: -
    Description: -
  • pre_norm_param
    Type: torch_atb.LayerNormParam.PreNormParam
    Default: -
    Description: -
  • post_norm_param
    Type: torch_atb.LayerNormParam.PostNormParam
    Default: -
    Description: -
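To connect these fields, the following condensed sketch (distilled from the call example at the end of this page; field values are illustrative) passes layer_type to the constructor, fills in the matching sub-parameter, and wraps the result in torch_atb.Operation.

import torch_atb

# Condensed from the call example below; field values are illustrative only.
param = torch_atb.LayerNormParam(
    layer_type=torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_NORM
)
param.norm_param.epsilon = 1e-5  # fill in the sub-parameter matching layer_type
layernorm_op = torch_atb.Operation(param)
# outputs = layernorm_op.forward([x_npu, weight_npu, bias_npu])  # inputs must already be NPU tensors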

LayerNormParam.LayerNormType

Enum values:

  • LAYER_NORM_UNDEFINED
  • LAYER_NORM_NORM
  • LAYER_NORM_PRENORM
  • LAYER_NORM_POSTNORM
  • LAYER_NORM_MAX
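The table above marks the LAYER_NORM_UNDEFINED default as unusable; the NORM, PRENORM and POSTNORM members select the plain, pre-norm and post-norm variants, and LAYER_NORM_MAX is presumably a boundary marker rather than a selectable type. A minimal check that the members are reachable as attributes of the enum class (assuming only the names listed above):

import torch_atb

# Print the selectable normalization variants; only the enum names documented above are assumed.
LayerNormType = torch_atb.LayerNormParam.LayerNormType
for name in ("LAYER_NORM_NORM", "LAYER_NORM_PRENORM", "LAYER_NORM_POSTNORM"):
    print(name, getattr(LayerNormType, name))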

LayerNormParam.NormParam

Attributes:

  • quant_type
    Type: torch_atb.QuantType
    Default: torch_atb.QuantType.QUANT_UNQUANT
    Description: No quantization is performed.
  • epsilon
    Type: float
    Default: 1e-5
    Description: -
  • begin_norm_axis
    Type: int
    Default: 0
    Description: -
  • begin_params_axis
    Type: int
    Default: 0
    Description: -
  • dynamic_quant_type
    Type: torch_atb.DynamicQuantType
    Default: torch_atb.DynamicQuantType.DYNAMIC_QUANT_UNDEFINED
    Description: -
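A hedged sketch of the plain-norm configuration, using only the attribute names from the table above; epsilon and begin_norm_axis mirror the call example, while quant_type and dynamic_quant_type are left at their no-quantization defaults.

import torch_atb

# Sketch for the LAYER_NORM_NORM path; values other than the documented defaults are illustrative.
param = torch_atb.LayerNormParam(
    layer_type=torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_NORM
)
param.norm_param.epsilon = 1e-5
param.norm_param.begin_norm_axis = -1   # normalize from the last axis, as in the call example
param.norm_param.begin_params_axis = 0  # documented default for the weight/bias axis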

LayerNormParam.PreNormParam

Attributes:

  • quant_type
    Type: torch_atb.QuantType
    Default: torch_atb.QuantType.QUANT_UNQUANT
    Description: No quantization is performed.
  • epsilon
    Type: float
    Default: 1e-5
    Description: -
  • op_mode
    Type: int
    Default: 0
    Description: -
  • zoom_scale_value
    Type: float
    Default: 1.0
    Description: -
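The pre-norm variant is configured the same way; the sketch below assumes pre_norm_param can be written field by field like norm_param in the call example, which is not verified here.

import torch_atb

# Hedged sketch for the pre-norm path; the pre_norm_param attribute access pattern is assumed.
param = torch_atb.LayerNormParam(
    layer_type=torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_PRENORM
)
param.pre_norm_param.epsilon = 1e-5
param.pre_norm_param.op_mode = 0             # documented default
param.pre_norm_param.zoom_scale_value = 1.0  # documented default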

LayerNormParam.PostNormParam

Attributes:

  • quant_type
    Type: torch_atb.QuantType
    Default: torch_atb.QuantType.QUANT_UNQUANT
    Description: No quantization is performed.
  • epsilon
    Type: float
    Default: 1e-5
    Description: -
  • op_mode
    Type: int
    Default: 0
    Description: -
  • zoom_scale_value
    Type: float
    Default: 1.0
    Description: -
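PostNormParam exposes the same fields; a correspondingly hedged sketch for the post-norm path:

import torch_atb

# Hedged sketch for the post-norm path; as above, the attribute access pattern is assumed.
param = torch_atb.LayerNormParam(
    layer_type=torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_POSTNORM
)
param.post_norm_param.epsilon = 1e-5
param.post_norm_param.zoom_scale_value = 1.0  # documented default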

Call example

import torch
import torch_atb
import numbers

def layernorm():
    eps = 1e-05
    batch, sentence_length, embedding_dim = 20, 5, 10
    # Build the input tensor and move it to the NPU device
    # (.npu() is provided by the Ascend PyTorch adapter, torch_npu).
    embedding = torch.randn(batch, sentence_length, embedding_dim)
    embedding_npu = embedding.npu()
    # Normalize over the last dimension; weight and bias match the normalized shape.
    normalized_shape = (embedding_dim,) if isinstance(embedding_dim, numbers.Integral) else tuple(embedding_dim)
    weight = torch.ones(normalized_shape, dtype=torch.float32).npu()
    bias = torch.zeros(normalized_shape, dtype=torch.float32).npu()
    print("embedding: ", embedding_npu)
    print("weight: ", weight)
    print("bias: ", bias)
    # Configure the parameter: plain layer norm, epsilon, and the axis where normalization starts.
    layer_norm_param = torch_atb.LayerNormParam(layer_type=torch_atb.LayerNormParam.LayerNormType.LAYER_NORM_NORM)
    layer_norm_param.norm_param.epsilon = eps
    layer_norm_param.norm_param.begin_norm_axis = len(normalized_shape) * -1
    layernorm = torch_atb.Operation(layer_norm_param)

    def layernorm_run():
        # The operation takes [input, weight, bias] and returns a list of output tensors.
        layernorm_outputs = layernorm.forward([embedding_npu, weight, bias])
        return layernorm_outputs

    outputs = layernorm_run()
    print("outputs: ", outputs)

if __name__ == "__main__":
    layernorm()