torch.cuda

If an API's "Supported" column is "Yes" and its "Limitations and notes" column is "-", the API's support is consistent with the native API.

When using a supported cuda interface, the cuda part of the API name must be converted to its NPU form, i.e. torch.cuda.*** becomes torch_npu.npu.***. For example:
torch.cuda.current_device --> torch_npu.npu.current_device
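
A minimal sketch of the same renaming applied to a few of the device-management calls listed in the table below; importing torch_npu and using device index 0 are assumptions of the example, not requirements of the APIs.

```python
import torch
import torch_npu  # the torch_npu plugin provides the torch_npu.npu namespace and NPU backend

# CUDA form: torch.cuda.is_available(), torch.cuda.set_device(), ...
# NPU form:  the same calls with "torch.cuda" replaced by "torch_npu.npu"
if torch_npu.npu.is_available():
    torch_npu.npu.set_device(0)                 # counterpart of torch.cuda.set_device
    idx = torch_npu.npu.current_device()        # counterpart of torch.cuda.current_device
    name = torch_npu.npu.get_device_name(idx)   # counterpart of torch.cuda.get_device_name
    print(f"Using NPU {idx}: {name}")
```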

| API name | NPU-form name | Supported | Limitations and notes |
| --- | --- | --- | --- |
| torch.cuda.can_device_access_peer | torch_npu.npu.can_device_access_peer | Yes | - |
| torch.cuda.current_blas_handle | torch_npu.npu.current_blas_handle | Yes | - |
| torch.cuda.current_device | torch_npu.npu.current_device | Yes | - |
| torch.cuda.device_count | torch_npu.npu.device_count | Yes | - |
| torch.cuda.device_of | torch_npu.npu.device_of | Yes | - |
| torch.cuda.get_device_name | torch_npu.npu.get_device_name | Yes | - |
| torch.cuda.get_device_properties | torch_npu.npu.get_device_properties | Yes | The returned data structure does not support the major, minor, or multi_processor_count fields |
| torch.cuda.get_sync_debug_mode | torch_npu.npu.get_sync_debug_mode | Yes | - |
| torch.cuda.init | torch_npu.npu.init | Yes | - |
| torch.cuda.is_available | torch_npu.npu.is_available | Yes | - |
| torch.cuda.is_initialized | torch_npu.npu.is_initialized | Yes | - |
| torch.cuda.set_device | torch_npu.npu.set_device | Yes | - |
| torch.cuda.set_stream | torch_npu.npu.set_stream | Yes | - |
| torch.cuda.set_sync_debug_mode | torch_npu.npu.set_sync_debug_mode | Yes | - |
| torch.cuda.stream | torch_npu.npu.stream | Yes | - |
| torch.cuda.utilization | torch_npu.npu.utilization | Yes | - |
| torch.cuda.get_rng_state | torch_npu.npu.get_rng_state | Yes | - |
| torch.cuda.set_rng_state | torch_npu.npu.set_rng_state | Yes | - |
| torch.cuda.set_rng_state_all | torch_npu.npu.set_rng_state_all | Yes | - |
| torch.cuda.manual_seed | torch_npu.npu.manual_seed | Yes | - |
| torch.cuda.manual_seed_all | torch_npu.npu.manual_seed_all | Yes | - |
| torch.cuda.seed | torch_npu.npu.seed | Yes | - |
| torch.cuda.seed_all | torch_npu.npu.seed_all | Yes | - |
| torch.cuda.initial_seed | torch_npu.npu.initial_seed | Yes | - |
| torch.cuda.Stream | torch_npu.npu.Stream | Yes | - |
| torch.cuda.Stream.wait_stream | torch_npu.npu.Stream.wait_stream | Yes | - |
| torch.cuda.Event | torch_npu.npu.Event | Yes | - |
| torch.cuda.Event.elapsed_time | torch_npu.npu.Event.elapsed_time | Yes | - |
| torch.cuda.Event.query | torch_npu.npu.Event.query | Yes | - |
| torch.cuda.Event.wait | torch_npu.npu.Event.wait | Yes | - |
| torch.cuda.empty_cache | torch_npu.npu.empty_cache | Yes | - |
| torch.cuda.memory_stats | torch_npu.npu.memory_stats | Yes | - |
| torch.cuda.memory_summary | torch_npu.npu.memory_summary | Yes | - |
| torch.cuda.memory_allocated | torch_npu.npu.memory_allocated | Yes | - |
| torch.cuda.max_memory_allocated | torch_npu.npu.max_memory_allocated | Yes | - |
| torch.cuda.reset_max_memory_allocated | torch_npu.npu.reset_max_memory_allocated | Yes | - |
| torch.cuda.memory_reserved | torch_npu.npu.memory_reserved | Yes | - |
| torch.cuda.max_memory_reserved | torch_npu.npu.max_memory_reserved | Yes | - |
| torch.cuda.set_per_process_memory_fraction | torch_npu.npu.set_per_process_memory_fraction | Yes | - |
| torch.cuda.memory_cached | torch_npu.npu.memory_cached | Yes | - |
| torch.cuda.max_memory_cached | torch_npu.npu.max_memory_cached | Yes | - |
| torch.cuda.reset_max_memory_cached | torch_npu.npu.reset_max_memory_cached | Yes | - |
| torch.cuda.reset_peak_memory_stats | torch_npu.npu.reset_peak_memory_stats | Yes | - |
| torch.cuda.caching_allocator_alloc | torch_npu.npu.caching_allocator_alloc | Yes | - |
| torch.cuda.caching_allocator_delete | torch_npu.npu.caching_allocator_delete | Yes | - |
| torch.cuda.StreamContext | - | Yes | - |
| torch.cuda.current_stream | torch_npu.npu.current_stream | Yes | - |
| torch.cuda.default_stream | torch_npu.npu.default_stream | Yes | - |
| torch.cuda.device | torch_npu.npu.device | Yes | - |
| torch.cuda.get_device_capability | - | Yes | NPU devices have no corresponding concept |
| torch.cuda.memory_usage | - | Yes | - |
| torch.cuda.synchronize | torch_npu.npu.synchronize | Yes | - |
| torch.cuda.comm.scatter | - | Yes | - |
| torch.cuda.comm.gather | - | Yes | - |
| torch.cuda.mem_get_info | torch_npu.npu.mem_get_info | Yes | - |
| torch.cuda.get_allocator_backend | torch_npu.npu.get_allocator_backend | Yes | - |
| torch.cuda.CUDAPluggableAllocator | torch_npu.npu.NPUPluggableAllocator | Yes | This interface involves high-risk operations; see torch_npu.npu.NPUPluggableAllocator for usage |
| torch.cuda.change_current_allocator | torch_npu.npu.change_current_allocator | Yes | This interface involves high-risk operations; see torch_npu.npu.change_current_allocator for usage |
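
As a rough illustration of how several of the mapped interfaces combine, the sketch below times a matrix multiplication with Event objects and then reads back allocator statistics. The tensor sizes and the "npu:0" device string are assumptions of the example, and Event(enable_timing=True) / Event.record are assumed to mirror their torch.cuda counterparts (they are not listed explicitly above); the remaining calls appear in the table.

```python
import torch
import torch_npu

device = "npu:0"  # assumed device string for the first NPU
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

start = torch_npu.npu.Event(enable_timing=True)  # counterpart of torch.cuda.Event
end = torch_npu.npu.Event(enable_timing=True)

start.record()
c = a @ b
end.record()
torch_npu.npu.synchronize()                      # counterpart of torch.cuda.synchronize
print("matmul took", start.elapsed_time(end), "ms")  # Event.elapsed_time from the table

print("allocated bytes:", torch_npu.npu.memory_allocated())
print("reserved bytes:", torch_npu.npu.memory_reserved())
torch_npu.npu.empty_cache()                      # release cached blocks held by the allocator
```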