Distributed RPC Framework

If an API's support status is not marked, that API's support status is pending verification.

| API Name | Supported | Restrictions and Notes |
| --- | --- | --- |
| torch.distributed.rpc.init_rpc | | Enabling RPC on NPU devices requires specific settings in init_rpc: bind backend to rpc.backend_registry.BackendType.NPU_TENSORPIPE; bind options to NPUTensorPipeRpcBackendOptions (import it via from torch_npu.distributed.rpc.options import NPUTensorPipeRpcBackendOptions) and set the option fields, whose parameter format is the same as the original TensorPipeRpcBackendOptions. |
| torch.distributed.rpc.rpc_sync | | |
| torch.distributed.rpc.rpc_async | | |
| torch.distributed.rpc.remote | | |
| torch.distributed.rpc.get_worker_info | | |
| torch.distributed.rpc.shutdown | | |
| torch.distributed.rpc.WorkerInfo | | |
| torch.distributed.rpc.WorkerInfo.id | | |
| torch.distributed.rpc.WorkerInfo.name | | |
| torch.distributed.rpc.functions.async_execution | | |
| torch.distributed.rpc.backend_registry.BackendType.TENSORPIPE | | Use the adapted torch.distributed.rpc.backend_registry.BackendType.NPU_TENSORPIPE instead. |
| torch.distributed.rpc.backend_registry.BackendType.NPU_TENSORPIPE | | |
| torch.distributed.rpc.TensorPipeRpcBackendOptions | | Use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions instead. |
| torch.distributed.rpc.TensorPipeRpcBackendOptions.init_method | | Use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions instead. |
| torch.distributed.rpc.TensorPipeRpcBackendOptions.rpc_timeout | | Use the adapted torch.distributed.rpc.NPUTensorPipeRpcBackendOptions instead. |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.device_maps | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.devices | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.init_method | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.num_worker_threads | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.rpc_timeout | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.set_device_map | | |
| torch.distributed.rpc.NPUTensorPipeRpcBackendOptions.set_devices | | |
| torch.distributed.rpc.RRef | | |
| torch.distributed.rpc.PyRRef | | |
| torch.distributed.nn.api.remote_module.RemoteModule | | |
| torch.distributed.nn.api.remote_module.RemoteModule.get_module_rref | | |
| torch.distributed.nn.api.remote_module.RemoteModule.remote_parameters | | |
| torch.distributed.autograd.backward | | |
| torch.distributed.autograd.context | | |
| torch.distributed.autograd.get_gradients | | |
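
The init_rpc settings described in the table can be sketched as follows. This is a minimal sketch, assuming a torch_npu installation and a two-worker setup; the worker names ("worker0", "worker1"), the rendezvous address, and the device map {0: 0} are illustrative assumptions, not values from this document.

```python
# Sketch: enabling RPC on NPU devices per the settings in the table above.
# Assumes torch_npu is installed; worker names, address, and device map are
# illustrative placeholders.
import torch.distributed.rpc as rpc
from torch_npu.distributed.rpc.options import NPUTensorPipeRpcBackendOptions

# Parameter format matches the original TensorPipeRpcBackendOptions.
options = NPUTensorPipeRpcBackendOptions(
    init_method="tcp://127.0.0.1:29500",  # rendezvous address (illustrative)
    num_worker_threads=16,
    rpc_timeout=60,
)
# Map local NPU 0 to NPU 0 on the peer so tensor arguments can move
# device-to-device.
options.set_device_map("worker1", {0: 0})

rpc.init_rpc(
    "worker0",
    rank=0,
    world_size=2,
    # Bind backend to the NPU-adapted TensorPipe backend.
    backend=rpc.backend_registry.BackendType.NPU_TENSORPIPE,
    rpc_backend_options=options,
)
# ...issue RPCs with rpc.rpc_sync / rpc.rpc_async / rpc.remote...
rpc.shutdown()
```

The same options object is reused on every worker; only the worker name and rank change per process.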