Distributed RPC Framework

If an API's "Supported" column is "Yes" and its "Restrictions and Notes" column is "-", the API's support level is consistent with the native API.

| API Name | Supported | Restrictions and Notes |
| --- | --- | --- |
| torch.distributed.rpc.init_rpc | Yes | Enabling RPC on NPU devices requires specific settings in init_rpc: bind the backend to rpc.backend_registry.BackendType.NPU_TENSORPIPE and bind the options to NPUTensorPipeRpcBackendOptions, imported via from torch_npu.distributed.rpc.options import NPUTensorPipeRpcBackendOptions; its parameters have the same format as the native TensorPipeRpcBackendOptions. See the sketch after this table. |
| torch.distributed.rpc.rpc_sync | Yes | - |
| torch.distributed.rpc.rpc_async | Yes | - |
| torch.distributed.rpc.remote | Yes | - |
| torch.distributed.rpc.get_worker_info | Yes | - |
| torch.distributed.rpc.shutdown | Yes | - |
| torch.distributed.rpc.WorkerInfo | Yes | - |
| torch.distributed.rpc.functions.async_execution | Yes | - |
| torch.distributed.rpc.BackendType | Yes | - |
| torch.distributed.rpc.RpcBackendOptions | Yes | - |
| torch.distributed.rpc.TensorPipeRpcBackendOptions | Yes | It is recommended to use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions instead. |
| torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map | Yes | It is recommended to use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.set_device_map instead. |
| torch.distributed.rpc.TensorPipeRpcBackendOptions.set_devices | Yes | It is recommended to use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.set_devices instead. |
| torch.distributed.rpc.RRef | Yes | - |
| torch.distributed.rpc.PyRRef | Yes | - |
| torch.distributed.nn.api.remote_module.RemoteModule | Yes | - |
| torch.distributed.nn.api.remote_module.RemoteModule.get_module_rref | Yes | - |
| torch.distributed.nn.api.remote_module.RemoteModule.remote_parameters | Yes | - |
| torch.distributed.autograd.backward | Yes | - |
| torch.distributed.autograd.context | Yes | - |
| torch.distributed.autograd.get_gradients | Yes | - |
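The sketch below illustrates how the NPU-specific init_rpc settings and the adapted backend options from the table above might be wired together. Only the backend enum (rpc.backend_registry.BackendType.NPU_TENSORPIPE), the options class and its import path (torch_npu.distributed.rpc.options.NPUTensorPipeRpcBackendOptions), and the fact that it takes the same parameters as the native TensorPipeRpcBackendOptions come from the table; the worker names, master address/port, option values, and the "npu:0" device-map strings are illustrative assumptions, not a definitive implementation.

```python
# Minimal sketch, assuming a two-process, single-node setup; values below are illustrative.
import os

import torch
import torch_npu  # registers NPU device support with PyTorch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
from torch_npu.distributed.rpc.options import NPUTensorPipeRpcBackendOptions


def run_worker(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # assumed single-node rendezvous
    os.environ.setdefault("MASTER_PORT", "29500")

    # Same parameter format as the native TensorPipeRpcBackendOptions.
    options = NPUTensorPipeRpcBackendOptions(num_worker_threads=8, rpc_timeout=120)
    if rank == 0:
        # Assumption: device maps use "npu:<index>" strings, mirroring the native "cuda:<index>" usage.
        options.set_device_map("worker1", {"npu:0": "npu:0"})
        options.set_devices(["npu:0"])

    rpc.init_rpc(
        name=f"worker{rank}",                                      # hypothetical worker names
        rank=rank,
        world_size=world_size,
        backend=rpc.backend_registry.BackendType.NPU_TENSORPIPE,   # NPU backend binding
        rpc_backend_options=options,                               # adapted options binding
    )

    if rank == 0:
        # rpc_sync / rpc_async / remote behave as in native PyTorch once init_rpc succeeds.
        result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
        print(result)

    rpc.shutdown()


if __name__ == "__main__":
    # Launch one process per RPC worker.
    mp.spawn(run_worker, args=(2,), nprocs=2, join=True)
```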