- Currently, saving and loading a model partitioned via the torch.export route is supported only through torch.save and torch.load. Because torch.load is not fully compatible with torch.fx.GraphModule objects, calling torch.load on a partitioned model may fail (see the save/load sketch after the example code below).
- The following code manually falls back the tanh operator to the CPU for execution. To fall back to torch_npu instead, install the torch_npu environment as described in the companion Torch_NPU usage documentation, and add import torch_npu before import mindietorch in the sample code below.
The example code is shown below:
import torch
import torch.nn as nn
import mindietorch
# Simple model: relu -> tanh -> sigmoid, built from aten ops.
class Test(nn.Module):
    def forward(self, x):
        x = torch.ops.aten.relu.default(x)
        x = torch.ops.aten.tanh.default(x)
        out = torch.ops.aten.sigmoid.default(x)
        return out
shape = (2, 2)
input = torch.randn(shape)
inputs_info = [mindietorch.Input(shape)]
model = Test()
# torch_executed_ops keeps tanh out of the compiled subgraph so it falls back
# to default torch (CPU) execution; min_block_size=1 allows even single-operator
# subgraphs to be compiled.
compiled_model = mindietorch.compile(model, ir="dynamo",
                                     inputs=inputs_info,
                                     torch_executed_ops=[torch.ops.aten.tanh.default],
                                     min_block_size=1)
device_id = 0
mindietorch.set_device(device_id)
npu_input = input.to("npu")
infer_ret = compiled_model.forward(npu_input)[0].to("cpu")
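As noted in the first item above, a partitioned model can be persisted with torch.save and restored with torch.load. The sketch below is a minimal illustration that reuses compiled_model and npu_input from the example; the file name "compiled_model.pt" is only an assumption, and the load is guarded because torch.load is not fully compatible with torch.fx.GraphModule objects and may fail.
# Minimal sketch: save and reload the partitioned model (file name is illustrative).
save_path = "compiled_model.pt"
torch.save(compiled_model, save_path)
try:
    # torch.load may fail since it does not fully support torch.fx.GraphModule objects.
    reloaded_model = torch.load(save_path)
    reloaded_ret = reloaded_model.forward(npu_input)[0].to("cpu")
except Exception as e:
    print(f"Loading the partitioned model failed: {e}")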