Function: execute_async

C Prototype

aclError aclmdlExecuteAsync(uint32_t modelId, const aclmdlDataset *input, aclmdlDataset *output, aclrtStream stream)

Python Function

ret = acl.mdl.execute_async(model_id, input, output, stream)

Function Usage

Executes a model for inference. This API is asynchronous.

Input Description

model_id: int, ID of the model to be inferred.

You can obtain the model ID after the model is successfully loaded by calling a model loading API (for example, acl.mdl.load_from_file).

input: int, pointer address of the input data for model inference. For details, see aclmdlDataset.

output: int, pointer address of the output data for model inference. For details, see aclmdlDataset.

stream: int, pointer address of the created stream. To specify a new stream, you can create and obtain the pointer address of the stream by calling acl.rt.create_stream.

Return Value

ret: int, error code. 0 indicates success; other values indicate failure.
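Because every pyACL call returns an integer error code, a small checking helper keeps inference code readable. Below is a minimal sketch; the helper name check_ret and the success constant are our own convention (0 as success follows the return-value description above), and the acl calls are shown only as comments because they require Ascend hardware and a loaded model:

```python
# Minimal error-checking idiom for pyACL-style return codes.
# 0 is the success value; any other value indicates failure.
ACL_SUCCESS = 0

def check_ret(api_name, ret):
    """Raise if a pyACL call returned a nonzero error code."""
    if ret != ACL_SUCCESS:
        raise RuntimeError(f"{api_name} failed with error code {ret}")

# Typical asynchronous inference sequence (illustrative only):
#   ret = acl.mdl.execute_async(model_id, input_ds, output_ds, stream)
#   check_ret("acl.mdl.execute_async", ret)
#   ret = acl.rt.synchronize_stream(stream)  # wait for the delivered task
#   check_ret("acl.rt.synchronize_stream", ret)
```

Wrapping every return code this way surfaces failures at the exact call site instead of letting a bad error code propagate into later stream operations.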

Restrictions

  • This API is asynchronous: the call delivers a task rather than executing it. After this API is called, call a synchronization API (for example, acl.rt.synchronize_stream) to ensure that the task is complete.
  • For the same model_id, acl.mdl.execute_async cannot be called concurrently on multiple streams to perform model inference.
  • If the same model_id is shared by multiple threads due to service requirements, the user threads must use a lock to ensure that refreshing the input and output memory and executing inference are performed as one uninterrupted sequence. For example:
    // API call sequence of thread A:
    lock(handle1) -> acl.rt.memcpy_async(stream1) (Refresh the input and output memory) -> acl.mdl.execute_async(modelId1, stream1) (Execute inference) -> unlock(handle1)
    
    // API call sequence of thread B:
    lock(handle1) -> acl.rt.memcpy_async(stream1) (Refresh the input and output memory) -> acl.mdl.execute_async(modelId1, stream1) (Execute inference) -> unlock(handle1)
  • The operations of loading, executing, and unloading a model must be performed in the same context. For details about how to create a context, see acl.rt.create_context.
  • You can call acl.rt.malloc, acl.rt.malloc_host, acl.rt.malloc_cached, acl.media.dvpp_malloc, or acl.himpi.dvpp_malloc to allocate the memory for storing the model input and output data.

    The acl.media.dvpp_malloc and acl.himpi.dvpp_malloc APIs are dedicated memory allocation APIs for media data processing. To reduce data copies, the output memory of media data processing can be used directly as the input of model inference.

    The hardware imposes memory alignment and padding requirements. If you use one of these APIs to allocate a large memory block and then subdivide and manage it yourself, the alignment and padding restrictions of the corresponding API must still be met. For details, see Secondary Memory Allocation.
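The per-model locking rule above can be sketched with standard Python threading. In this sketch the acl calls are replaced by hypothetical placeholders (refresh_io standing in for acl.rt.memcpy_async, run_inference standing in for acl.mdl.execute_async), since real execution requires Ascend hardware; the point it demonstrates is that each thread holds the model's lock across both the memory refresh and the inference delivery:

```python
import threading

# One lock per model_id: both operations below must happen under it so
# that another thread cannot refresh the I/O memory between them.
model_lock = threading.Lock()
call_log = []  # records the interleaving for illustration

def refresh_io(thread_name):
    # Stand-in for acl.rt.memcpy_async(...) refreshing input/output memory.
    call_log.append((thread_name, "memcpy_async"))

def run_inference(thread_name):
    # Stand-in for acl.mdl.execute_async(model_id, ..., stream).
    call_log.append((thread_name, "execute_async"))

def worker(thread_name):
    with model_lock:                # lock(handle1)
        refresh_io(thread_name)     # refresh the input and output memory
        run_inference(thread_name)  # deliver the inference task
                                    # unlock(handle1) on leaving the block

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the lock is held across both calls, each thread's memcpy step is always immediately followed by its own execute step; the two threads' pairs never interleave, which is exactly the guarantee the restriction asks for.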

Reference