Function: load_from_file_with_mem

C Prototype

aclError aclmdlLoadFromFileWithMem(const char *modelPath, uint32_t *modelId, void *workPtr, size_t workSize, void *weightPtr, size_t weightSize)

Python Function

model_id, ret = acl.mdl.load_from_file_with_mem(model_path, work_ptr, work_size, weight_ptr, weight_size)

Function Usage

Loads an offline model adapted to the Ascend AI Processor from a file. The working memory and weight memory of the model are managed by the user.

Returns the model ID of the loaded model. The model ID identifies the model in subsequent operations.
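For user-managed memory, a typical load sequence queries the required memory sizes with acl.mdl.query_size, allocates device memory with acl.rt.malloc, and then loads the model. The following is a minimal sketch, not a complete application: the acl module is passed in as an argument purely so the sketch is self-contained, and error handling is reduced to assertions. It assumes a device and context have already been set up.

```python
# Sketch only. Assumes pyACL is installed (`import acl`) and that
# acl.init(), acl.rt.set_device() and acl.rt.create_context() have
# already been called on this thread.
ACL_MEM_MALLOC_HUGE_FIRST = 0  # device memory allocation policy


def load_model(acl, model_path):
    """Load an .om model with user-managed working and weight memory."""
    # Ask the runtime how much working and weight memory this model needs.
    work_size, weight_size, ret = acl.mdl.query_size(model_path)
    assert ret == 0
    # Allocate both buffers on the device. They must stay alive (not be
    # freed) until the model is unloaded.
    work_ptr, ret = acl.rt.malloc(work_size, ACL_MEM_MALLOC_HUGE_FIRST)
    assert ret == 0
    weight_ptr, ret = acl.rt.malloc(weight_size, ACL_MEM_MALLOC_HUGE_FIRST)
    assert ret == 0
    # Load the model into the user-provided memory.
    model_id, ret = acl.mdl.load_from_file_with_mem(
        model_path, work_ptr, work_size, weight_ptr, weight_size)
    assert ret == 0
    return model_id, work_ptr, weight_ptr
```

After inference finishes, the model should be unloaded with acl.mdl.unload before the two buffers are released with acl.rt.free.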

Input Description

model_path: str, path of the offline model file, including the file name. The user who runs the app must have the permission to access the path.

The .om file is an offline model adapted to the Ascend AI Processor.

NOTE:

For details about how to obtain the .om file, see Model Building.

work_ptr: int, pointer address of the working memory (for storing model input and output data) required by the model on the device. The memory is managed by the user and must not be freed during model execution. If 0 is passed for this parameter, the system manages the memory.

NOTE:

When the memory is managed by the user and multiple models are executed in serial, the models can share one workspace. However, it is the user's responsibility to guarantee the serial execution sequence of the models and to make the shared workspace at least as large as the largest workspace required by any of the models. Ensure serial execution as follows:

  • For synchronous model execution, add a lock to ensure that tasks are executed in serial.
  • For asynchronous model execution, use a single stream to ensure that tasks are executed in serial.
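For the synchronous case, the lock-based pattern can be sketched in plain Python. Here run_inference is a hypothetical stand-in for a synchronous acl.mdl.execute call on one of the models sharing the workspace:

```python
import threading

# Hypothetical sketch: several models share one workspace, so their
# synchronous executions must never overlap. One lock enforces this.
workspace_lock = threading.Lock()
execution_order = []


def run_inference(model_id):
    # Stand-in for acl.mdl.execute(model_id, input_dataset, output_dataset).
    execution_order.append(model_id)


def execute_serially(model_id):
    # Only one model touches the shared workspace at any time.
    with workspace_lock:
        run_inference(model_id)


threads = [threading.Thread(target=execute_serially, args=(i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The order in which the three executions run is not fixed, but the lock guarantees they never overlap, which is what workspace sharing requires.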

work_size: int, size of the workspace required by the model, in bytes. This parameter is ignored when work_ptr is set to 0.

weight_ptr: int, pointer address of the model weight memory (for storing weight data) on the device. The memory is managed by the user and must not be freed during model execution. If 0 is passed for this parameter, the system manages the memory.

NOTE:

With user-managed weight memory, in multi-thread scenarios where each thread loads the same model once, the threads can share a single weight_ptr, because the weight memory is read-only during inference.

Note that weight_ptr must not be freed while the sharing is in progress.
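The sharing pattern can be sketched as follows. This is a hypothetical helper, not a complete program: each thread allocates its own working memory but reuses one weight buffer that was allocated and sized beforehand (for example, via acl.mdl.query_size), and the acl module is passed in as an argument so the sketch is self-contained:

```python
# Sketch: each thread loads the same .om file with its own working
# memory but one shared, read-only weight buffer.
def load_in_thread(acl, model_path, work_size, weight_ptr, weight_size):
    ACL_MEM_MALLOC_HUGE_FIRST = 0  # device memory allocation policy
    # Per-thread working memory: input/output scratch space cannot be
    # shared between models that may execute concurrently.
    work_ptr, ret = acl.rt.malloc(work_size, ACL_MEM_MALLOC_HUGE_FIRST)
    assert ret == 0
    # Shared weight memory: read-only during inference, so one buffer
    # serves every thread. It must not be freed while any of the loaded
    # models still exists.
    model_id, ret = acl.mdl.load_from_file_with_mem(
        model_path, work_ptr, work_size, weight_ptr, weight_size)
    assert ret == 0
    return model_id, work_ptr
```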

weight_size: int, size of the weight memory required by the model, in bytes. This parameter is ignored when weight_ptr is set to 0.

Return Value

model_id: int, model ID generated after the model is loaded.

ret: int, error code. 0 indicates success; other values indicate failure.

Restrictions

The operations of loading, executing, and unloading a model must be performed in the same context. For details about how to create a context, see acl.rt.set_device and acl.rt.create_context.
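The device and context setup that must precede loading can be sketched as follows (a hypothetical helper assuming device 0; the acl module is passed in as an argument so the sketch is self-contained). The context created here must be current on the thread that later loads, executes, and unloads the model:

```python
def create_inference_context(acl, device_id=0):
    # Initialize pyACL, bind a device, and create an explicit context.
    # Loading, executing, and unloading the model must all happen while
    # this context is current on the calling thread.
    ret = acl.init()
    assert ret == 0
    ret = acl.rt.set_device(device_id)
    assert ret == 0
    context, ret = acl.rt.create_context(device_id)
    assert ret == 0
    return context
```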

API

pyACL also provides the acl.mdl.set_config_opt and acl.mdl.load_with_config APIs for model loading. The caller sets attributes on a configuration object and passes it to the load API to control how the model is loaded and who manages the memory.

Reference

For details about the API call sequence, see Loading a Model.