torch.autograd

If an API's "Supported" value is "Yes" and its "Limitations" value is "-", the API's support is consistent with the native PyTorch API.

| API Name | Supported | Limitations |
| --- | --- | --- |
| torch.autograd.Function | Yes | - |
| torch.autograd.profiler.profile | Yes | When collecting profiling data on an NPU, "use_device" must be set to "npu" |
| torch.autograd.profiler.emit_nvtx | Yes | - |
| torch.autograd.profiler.emit_itt | Yes | - |
| torch.autograd.detect_anomaly | Yes | - |
| torch.autograd.set_detect_anomaly | Yes | - |
| torch.autograd.graph.saved_tensors_hooks | Yes | - |
| torch.autograd.graph.save_on_cpu | Yes | - |
| torch.autograd.graph.disable_saved_tensors_hooks | Yes | - |
| torch.autograd.graph.register_multi_grad_hook | Yes | - |
| torch.autograd.graph.allow_mutation_on_saved_tensors | Yes | Supports fp32 |
| torch.autograd.backward | Yes | Supports bf16, fp16, fp32, fp64; sparse tensors are not supported |
| torch.autograd.grad | Yes | - |
| torch.autograd.forward_ad.dual_level | Yes | - |
| torch.autograd.forward_ad.make_dual | Yes | Supports fp32 |
| torch.autograd.forward_ad.unpack_dual | Yes | Supports fp32 |
| torch.autograd.functional.jacobian | Yes | Supports fp32 |
| torch.autograd.functional.hessian | Yes | Supports fp32 |
| torch.autograd.functional.vjp | Yes | Supports fp32 |
| torch.autograd.functional.jvp | Yes | Supports fp32 |
| torch.autograd.functional.vhp | Yes | Supports fp32 |
| torch.autograd.functional.hvp | Yes | Supports fp32 |
| Function.forward | Yes | - |
| Function.backward | Yes | - |
| Function.jvp | Yes | - |
| Function.vmap | Yes | - |
| FunctionCtx.mark_dirty | Yes | - |
| FunctionCtx.mark_non_differentiable | Yes | - |
| FunctionCtx.save_for_backward | Yes | - |
| FunctionCtx.set_materialize_grads | Yes | - |
| torch.autograd.gradcheck | Yes | - |
| torch.autograd.gradgradcheck | Yes | - |
| profile.export_chrome_trace | Yes | - |
| profile.key_averages | Yes | - |
| profile.self_cpu_time_total | Yes | - |
| profile.total_average | Yes | - |
| torch.autograd.profiler.load_nvprof | Yes | - |
| Node.name | Yes | - |
| Node.metadata | Yes | - |
| Node.next_functions | Yes | - |
| Node.register_hook | Yes | - |
| Node.register_prehook | Yes | - |
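The table notes that torch.autograd.profiler.profile requires "use_device" set to "npu" when collecting NPU profiling data. A minimal sketch of the legacy profiler, shown here with CPU defaults since the "npu" value assumes a torch_npu build:

```python
import torch

x = torch.randn(64, 64)

# On an NPU build this would be:
#   with torch.autograd.profiler.profile(use_device="npu") as prof: ...
# Here we profile on CPU with the default settings.
with torch.autograd.profiler.profile() as prof:
    y = x @ x

# profile.key_averages (also listed as supported above) aggregates the
# recorded events into a printable summary table.
summary = prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=5)
print(summary)
```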
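torch.autograd.backward accepts the floating dtypes listed above (bf16, fp16, fp32, fp64) but not sparse tensors. A minimal fp32 sketch:

```python
import torch

# fp32 is among the supported dtypes; sparse tensors are excluded.
x = torch.ones(3, dtype=torch.float32, requires_grad=True)
loss = (2 * x).sum()

# Equivalent to loss.backward(); gradients accumulate into x.grad.
torch.autograd.backward(loss)
print(x.grad)  # tensor([2., 2., 2.])
```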
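The torch.autograd.functional entries above (jacobian, hessian, vjp, jvp, vhp, hvp) are limited to fp32. A minimal sketch using torch.autograd.functional.jacobian on an fp32 input (runnable on CPU as well; on an NPU the tensors would additionally be placed on the device):

```python
import torch

def f(x):
    # Element-wise function; its Jacobian is diagonal with entries 2 * x_i.
    return x ** 2

# fp32 input, per the restriction noted in the table above.
x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32)

jac = torch.autograd.functional.jacobian(f, x)
print(jac)  # diagonal matrix with entries 2., 4., 6.
```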