Subgraph Partitioning on the TorchScript Path
The following code manually falls back the relu operator to the CPU for execution. To fall back to torch_npu instead, install the torch_npu environment as described in the companion Torch_NPU usage documentation, and in the sample code below import torch_npu before importing mindietorch.
Sample code:
import torch
import torch.nn as nn
import mindietorch

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.pool(x)
        out0 = self.relu(x)
        return out0

input = torch.randn([1, 3, 8, 16])
inputs_info = [mindietorch.Input(shape=(1, 3, 8, 16))]

model = Test()
traced_model = torch.jit.trace(model, input)
# torch_executed_ops forces the listed operators out of the compiled NPU
# engine, so aten::relu here runs on the CPU (or on torch_npu if imported).
compiled_model = mindietorch.compile(traced_model, ir="ts", inputs=inputs_info,
                                     torch_executed_ops=["aten::relu"])

device_id = 0
mindietorch.set_device(device_id)
npu_input = input.to("npu")
infer_ret = compiled_model.forward(npu_input).to("cpu")
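The entries passed to torch_executed_ops must be aten op kinds as they appear in the traced graph. One way to find the right names is to inspect the graph with standard PyTorch APIs; the sketch below uses only torch (no mindietorch) and mirrors the model above, so it can be run anywhere PyTorch is installed.

```python
import torch
import torch.nn as nn

# Same toy model as in the example above.
class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.pool(self.conv(x)))

traced = torch.jit.trace(Test(), torch.randn(1, 3, 8, 16))
# inlined_graph expands submodule calls so aten-level op kinds are visible.
op_kinds = {node.kind() for node in traced.inlined_graph.nodes()}
print(sorted(k for k in op_kinds if k.startswith("aten::")))
```

Any name printed here (for example "aten::relu") is a valid candidate for the torch_executed_ops list.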
Parent topic: MindIE Torch Subgraph Partitioning