Function: load_from_file

C Prototype

aclError aclmdlLoadFromFile(const char *modelPath, uint32_t *modelId)

Python Function

model_id, ret = acl.mdl.load_from_file(model_path)

Function Usage

Loads an offline model (a model adapted to the Ascend AI Processor) from a file. The memory required for running the model is managed by the system.

Returns the model ID generated after the model is loaded. The model ID identifies the model in subsequent operations.
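
A minimal usage sketch is shown below. The model path and the ACL_SUCCESS constant (0) are assumptions for illustration; error handling via a small helper is one common pattern, not part of the API itself.

```python
ACL_SUCCESS = 0  # conventional success code; assumption for this sketch


def check_ret(api, ret):
    """Raise if a pyACL call returned a nonzero error code."""
    if ret != ACL_SUCCESS:
        raise RuntimeError(f"{api} failed with error code {ret}")


def load_model(model_path):
    """Load an .om model and return its model ID."""
    import acl  # requires the pyACL runtime (Ascend environment)
    model_id, ret = acl.mdl.load_from_file(model_path)
    check_ret("acl.mdl.load_from_file", ret)
    return model_id
```

The returned model_id is then passed to subsequent model APIs (for example, model execution and unloading).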

Input Description

model_path: str, path of the offline model file, including the file name. The user who runs the application must have permission to access the path.

The offline model file is an offline model (.om file) adapted to the Ascend AI Processor.

NOTE:
  • For details about how to obtain the .om file, see Model Building.
  • When you use ATC to generate an .om file whose size needs to be limited, set --external_weight to 1. The weights of the Const/Constant nodes on the original network are then saved as separate files in a weight directory. Ensure that the weight directory is at the same level as the .om file so that pyACL can automatically find the weight files when you call this API to load the .om file. If the weight directory is not in the correct location, the weight files may fail to load.
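
The directory-layout requirement above can be checked before loading. This is a hedged sketch, not part of the pyACL API; the directory name "weight" and the helper name are assumptions for illustration.

```python
from pathlib import Path


def weight_dir_ok(om_path):
    """Check that a 'weight' directory sits at the same level as the .om file,
    as required when the model was converted with --external_weight set to 1."""
    om = Path(om_path)
    return (om.parent / "weight").is_dir()
```

Calling this check before acl.mdl.load_from_file can turn a confusing load failure into an early, explicit error message.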

Return Value

model_id: int, ID generated for the model after the system loads it. (In the C API, this value is returned through the modelId pointer.)

ret: int, error code. 0 indicates success; other values indicate failure.

Restrictions

  • Before loading a model file, check that the available memory is sufficient for the file size. If the memory is insufficient, the application behaves abnormally.
  • Loading, executing, and unloading a model must be performed in the same context. For details about how to create a context, see acl.rt.set_device and acl.rt.create_context.
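
The same-context restriction above can be sketched as the following lifecycle, where the device ID and model path are assumptions for illustration and the execution step is elided:

```python
def run_model(model_path, device_id=0):
    """Load, use, and unload a model within one device/context lifecycle."""
    import acl  # requires the pyACL runtime (Ascend environment)
    ret = acl.rt.set_device(device_id)
    context, ret = acl.rt.create_context(device_id)

    model_id, ret = acl.mdl.load_from_file(model_path)
    # ... execute the model here, in the same context ...
    ret = acl.mdl.unload(model_id)  # unload in the same context it was loaded in

    ret = acl.rt.destroy_context(context)
    ret = acl.rt.reset_device(device_id)
```

Keeping all three model operations between create_context and destroy_context satisfies the restriction.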

API

pyACL also provides the acl.mdl.set_config_opt and acl.mdl.load_with_config APIs for model loading. The caller sets attributes on the configuration object passed to acl.mdl.load_with_config to control how the model is loaded and whether the application or the system manages the memory.
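
A hedged sketch of that configuration-based flow follows. Only acl.mdl.set_config_opt and acl.mdl.load_with_config are named above; the handle creation/destruction calls and the attribute constant shown here are assumptions mirroring the C API (aclmdlCreateConfigHandle, ACL_MDL_PATH_PTR) and may differ in your pyACL version.

```python
ACL_MDL_PATH_PTR = 2  # assumed attribute ID; use the constant from your acl install


def load_with_config_sketch(model_path):
    """Load a model through a configuration handle (assumed call names)."""
    import acl  # requires the pyACL runtime (Ascend environment)
    handle = acl.mdl.create_config_handle()  # assumed name, mirrors aclmdlCreateConfigHandle
    ret = acl.mdl.set_config_opt(handle, ACL_MDL_PATH_PTR, model_path)
    model_id, ret = acl.mdl.load_with_config(handle)
    acl.mdl.destroy_config_handle(handle)  # assumed name, mirrors aclmdlDestroyConfigHandle
    return model_id, ret
```

Consult the pyACL reference for the exact attribute constants (for example, the load type and memory-management options) supported by acl.mdl.set_config_opt.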

Reference

For details about the API call sequence, see Loading a Model.