Graph Partitioning Feature for the TorchScript Path

The following code manually falls back the relu operator to the CPU for execution. To fall back to torch_npu instead, install the torch_npu environment as described in the companion Torch_NPU usage guide, and import torch_npu before importing mindietorch in the sample code below.
Sample code:
# 1. Import the mindietorch framework
import torch
import torch.nn as nn
import mindietorch

# 2. Export the model
class Test(nn.Module):  # define the model Test
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.pool(x)
        out0 = self.relu(x)
        return out0

input = torch.randn([1, 3, 8, 16])
inputs_info = [mindietorch.Input(shape=(1, 3, 8, 16))]
model = Test()
traced_model = torch.jit.trace(model, input)  # export the Test model

# 3. Compile the model; for details, see mindietorch.compile in the
#    Python API function reference.
compiled_model = mindietorch.compile(traced_model, ir="ts", inputs=inputs_info,
                                     torch_executed_ops=["aten::relu"])

# 4. Model inference
device_id = 0
mindietorch.set_device(device_id)  # use device 0
npu_input = input.to("npu")
infer_ret = compiled_model.forward(npu_input).to("cpu")  # accelerated inference
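As described above, switching the fallback target from the CPU to torch_npu only changes the environment setup and the import order; the rest of the flow is unchanged. A minimal sketch of the modified import section (this assumes a working torch_npu installation and is not runnable without the MindIE and Ascend environment):

```python
# Sketch: fall back the excluded operator to torch_npu instead of the CPU.
# Assumption: torch_npu is installed per the companion Torch_NPU usage guide.
import torch
import torch.nn as nn
import torch_npu     # must be imported BEFORE mindietorch for the fallback to apply
import mindietorch

# ... define, trace, and compile the model exactly as in the example above;
# with torch_npu imported first, the aten::relu op listed in
# torch_executed_ops is executed by torch_npu rather than on the CPU.
```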
Parent topic: MindIE Torch graph partitioning features