APIs
TF Adapter provides APIs for developing training or online inference scripts based on the deep learning framework TensorFlow 2.6.5.

API path: {install_path}/python/site-packages/npu_device
| API | Description |
|---|---|
| npu.open | Registers an NPU device. Used together with as_default to set the NPU as the default device. |
| | Returns the global singleton configuration object for initializing an NPU device. Modify the options on this singleton to control the NPU device's initialization options. This API must be called before the npu.open API call. |
| | Performs aggregation operations between workers in distributed NPU training. |
| | Synchronizes variables between workers in distributed NPU training. |
| | Adds the NPU AllReduce operation to aggregate the gradients before they are applied. This API applies only to distributed training. |
| | Shards the dataset and the global batch size across workers in distributed NPU training. |
| | Specifies the operators that keep their original precision. If an operator's precision in the original network model is not supported by the Ascend AI Processor, the system automatically computes with the highest precision the operator supports. |
| | Sets the number of iterations (steps) per loop offloaded to the NPU. |
| NpuLossScaleOptimizer | When the floating-point overflow mode is saturation mode, an overflow or underflow computed on the NPU may not output Inf or NaN. Replace LossScaleOptimizer in the script with NpuLossScaleOptimizer to mask this difference in overflow/underflow detection. |
| | Computes the Gaussian Error Linear Unit (GELU) activation function. Each input element x is multiplied by P(X <= x), where X follows N(0, 1). |
| | Sets the process-level overflow mode for floating-point computation. Two overflow modes are supported: saturation mode and Inf/NaN mode. |
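
The dataset-sharding row above describes splitting both the data and the global batch size across workers. A minimal, framework-free sketch of the idea, where `shard` and `per_worker_batch` are hypothetical helpers for illustration, not the TF Adapter API:

```python
# Conceptual sketch of dataset/batch sharding across workers.
# These helpers are illustrative only; they are not npu_device APIs.

def shard(dataset, num_workers, rank):
    """Each worker keeps every num_workers-th sample, starting at its rank."""
    return dataset[rank::num_workers]

def per_worker_batch(global_batch_size, num_workers):
    """The global batch is divided evenly across workers."""
    return global_batch_size // num_workers

data = list(range(8))
print(shard(data, 2, 0))          # worker 0 of 2 sees [0, 2, 4, 6]
print(shard(data, 2, 1))          # worker 1 of 2 sees [1, 3, 5, 7]
print(per_worker_batch(256, 8))   # each of 8 workers trains with batch 32
```

Together the two pieces keep the effective global batch size unchanged while ensuring no two workers train on the same sample.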
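
The GELU row above can be illustrated with a small framework-free sketch. `gelu` here is a hypothetical helper showing the math, not the TF Adapter operator itself: each input x is multiplied by the standard normal CDF P(X <= x).

```python
import math

def gelu(x):
    """Exact GELU: x * P(X <= x), where X ~ N(0, 1).

    P(X <= x) is the standard normal CDF, expressed via erf.
    """
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(0.0))    # 0.0: x itself is zero
print(gelu(10.0))   # ~10.0: for large x the CDF approaches 1
print(gelu(-10.0))  # ~0.0: for very negative x the CDF approaches 0
```

For large positive inputs GELU behaves like the identity, and for large negative inputs it approaches zero, which is why it is often described as a smooth alternative to ReLU.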