alltoallv

Description

Sends a variable amount of data to every rank in the collective communicator and receives a variable amount of data from every rank.

The default size of the shared data buffer between two NPUs is 200 MB. It can be adjusted through the environment variable HCCL_BUFFSIZE, whose unit is MB and whose value must be greater than or equal to 1. Across the communication network, each communicator occupies one buffer of HCCL_BUFFSIZE MB. Adjust HCCL_BUFFSIZE to match the communication data size of your service model to improve execution performance. For example:
export HCCL_BUFFSIZE=2048

Prototype

def all_to_all_v(send_data, send_counts, send_displacements, recv_counts, recv_displacements, group="hccl_world_group")

Parameters

send_data (Input)

Data to be sent.

TensorFlow tensor. Atlas Training Series Product: the supported data types are int8, uint8, int16, uint16, int32, uint32, int64, uint64, float16, float32, and float64.

send_counts (Input)

Number of data elements to send. send_counts[i] indicates the number of elements the current rank sends to rank i, counted in elements of the send_data data type.

For example, if the data type of send_data is int32 and send_counts[0]=1, send_counts[1]=2, the current rank sends one int32 element to rank 0 and two int32 elements to rank 1.

TensorFlow tensor with the data type int64.

send_displacements (Input)

Offset of the data to send. send_displacements[i] indicates the offset, relative to the start of send_data, of the block that the current rank sends to rank i, counted in elements of the send_data data type. (See the layout sketch after this table.)

Example:

  • The data type of send_data is int32.
  • send_counts[0]=1, send_counts[1]=2
  • send_displacements[0]=0, send_displacements[1]=1

The current rank sends the first int32 element of send_data to rank 0, and sends the second and third int32 elements of send_data to rank 1.

TensorFlow tensor with the data type int64.

recv_counts (Input)

Number of data elements to receive. recv_counts[i] indicates the number of elements the current rank receives from rank i. The usage is the same as that of send_counts.

TensorFlow tensor with the data type int64.

recv_displacements (Input)

Offset of the received data. recv_displacements[i] indicates the offset, relative to the start of recv_data, of the block that the current rank receives from rank i, counted in elements of the received data type. The usage is the same as that of send_displacements.

TensorFlow tensor with the data type int64.

group (Input)

Group name, which can be a user-defined value or hccl_world_group.

A string containing a maximum of 128 bytes, including the terminator.
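
The relationship between counts and displacements is easiest to see over a flat buffer: a contiguous, non-overlapping layout takes each displacement as the exclusive prefix sum of the preceding counts. The following standalone sketch (plain Python with illustrative values; it performs no HCCL calls) shows the derivation and which block each peer rank would receive:

# Illustrative layout helper, not part of the HCCL API.
# Displacements for a contiguous layout are the exclusive prefix sum of counts.
send_counts = [1, 2, 4]                      # elements destined for ranks 0, 1, 2
send_displacements = [0] * len(send_counts)
for i in range(1, len(send_counts)):
    send_displacements[i] = send_displacements[i - 1] + send_counts[i - 1]
print(send_displacements)                    # [0, 1, 3]

send_data = [10, 20, 21, 30, 31, 32, 33]     # 7 elements, one block per peer
for rank, (count, disp) in enumerate(zip(send_counts, send_displacements)):
    print("rank %d gets %s" % (rank, send_data[disp:disp + count]))
# rank 0 gets [10]; rank 1 gets [20, 21]; rank 2 gets [30, 31, 32, 33]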

Returns

recv_data: Tensor containing the data received from all ranks, laid out according to recv_counts and recv_displacements.

Constraints

  1. The caller rank must be within the range defined by the group argument passed to this API call. Otherwise, the API call fails.
  2. For the Atlas Training Series Product, the AlltoAllV communicators must meet the following requirement:

    In a cluster network, a 1p or 2p communicator within a single server must lie within a single cluster (devices 0–3 form one cluster and devices 4–7 form the other). For 4p and 8p communicators, whether on a single server or across multiple servers, the ranks must be selected cluster by cluster, and the clusters selected on each server must be consistent.

  3. The performance of the AlltoAllV operation depends on the size of the buffer used for sharing data between NPUs. When the communication data size exceeds the buffer size, performance deteriorates significantly. If the AlltoAllV communication data size in your service is large, increase the buffer size appropriately by setting the environment variable HCCL_BUFFSIZE (see the sizing sketch after this list).
  4. This API cannot be used in non-cluster scenarios.
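
As a rough aid for constraint 3, the sketch below estimates a buffer size from the largest per-step AlltoAllV payload. The payload figure and the 1.5x headroom factor are illustrative assumptions, not HCCL rules, and HCCL_BUFFSIZE is normally exported in the shell before the training process starts; the in-process assignment here only demonstrates the arithmetic:

# Hypothetical sizing sketch: derive HCCL_BUFFSIZE (MB) from the largest
# per-step AlltoAllV payload, with headroom. All values are illustrative.
import math
import os

payload_bytes = 512 * (1 << 20)              # assume ~512 MiB exchanged per step
buffsize_mb = max(1, math.ceil(payload_bytes * 1.5 / (1 << 20)))
os.environ["HCCL_BUFFSIZE"] = str(buffsize_mb)   # must take effect before HCCL initializes
print(buffsize_mb)                           # 768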

Applicability

Atlas Training Series Product

Example

The following is only a code snippet and cannot be executed. For details about how to call the HCCL Python APIs to perform collective communication, see Sample Code.

from npu_bridge.npu_init import *
import tensorflow as tf

# Two-rank example: each rank sends its three float32 values to both ranks.
send_data_tensor = tf.random_uniform((1, 3), minval=1, maxval=10, dtype=tf.float32)
send_counts_tensor = tf.constant([3, 3], dtype=tf.int64)         # 3 elements to each rank
send_displacements_tensor = tf.constant([0, 0], dtype=tf.int64)  # both send blocks start at offset 0
recv_counts_tensor = tf.constant([3, 3], dtype=tf.int64)         # 3 elements from each rank
recv_displacements_tensor = tf.constant([0, 3], dtype=tf.int64)  # received blocks must not overlap
result = hccl_ops.all_to_all_v(send_data_tensor, send_counts_tensor, send_displacements_tensor,
                               recv_counts_tensor, recv_displacements_tensor)
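
In this two-rank snippet, each rank sends its three float32 values to both ranks (both send blocks start at offset 0, so the same data is read twice), and the returned recv_data holds six values: the block from rank 0 at offset 0 followed by the block from rank 1 at offset 3.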