mxpi_modelinfer

This plugin will be discontinued. Use the mxpi_tensorinfer plugin instead.

Function

Performs model inference to classify or detect objects.

Constraints

Currently, only inference models with a single tensor input (image data) are supported.

Plugin Base Class (Factory)

mxpi_modelinfer

Input/Output

Input: buffer (data type: MxpiBuffer) and metadata (data type: MxpiVisionList)

Output: buffer (data type: MxpiBuffer) and metadata (data type: MxpiObjectList, MxpiClassList, MxpiAttributeList, MxpiFeatureVectorList, and MxpiTensorPackageList when post-processing is not used)

Port Format (Caps)

Static input: {"image/yuv"}

Static output: {"metadata/object", "metadata/class", "metadata/attribute", "metadata/feature-vector", "metadata/tensor"}
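
The plugin is configured as an element in the stream's pipeline file. Below is a minimal sketch of a typical placement between image pre-processing plugins and a sink; the stream name, element names, file paths, post-processing library, and neighboring plugins are illustrative assumptions, not requirements of this section:

{
    "detection": {
        "stream_config": {
            "deviceId": "0"
        },
        "appsrc0": {
            "factory": "appsrc",
            "next": "mxpi_imagedecoder0"
        },
        "mxpi_imagedecoder0": {
            "factory": "mxpi_imagedecoder",
            "next": "mxpi_imageresize0"
        },
        "mxpi_imageresize0": {
            "factory": "mxpi_imageresize",
            "next": "mxpi_modelinfer0"
        },
        "mxpi_modelinfer0": {
            "props": {
                "dataSource": "mxpi_imageresize0",
                "modelPath": "models/yolov3.om",
                "postProcessConfigPath": "models/yolov3.cfg",
                "labelPath": "models/coco.names",
                "postProcessLibPath": "libMpYOLOv3PostProcessor.so"
            },
            "factory": "mxpi_modelinfer",
            "next": "appsink0"
        },
        "appsink0": {
            "factory": "appsink"
        }
    }
}

The properties in the mxpi_modelinfer0 props block are the ones described in Table 1. With a detection post-processing library configured as above, the output metadata is typically MxpiObjectList rather than the raw MxpiTensorPackageList.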

Property

For details, see Table 1.

Table 1 mxpi_modelinfer plugin properties

Each entry below gives the property name and description, followed by whether the property is mandatory and whether it is modifiable.

modelPath

OM file path of the inference model. The maximum model size is 4 GB. The model file must be owned by the current user, and its permission cannot be higher than 640.

Mandatory: Yes. Modifiable: Yes.

postProcessConfigPath

Path of the post-processing configuration file.

Mandatory: No. Modifiable: Yes.

postProcessConfigContent

Post-processing configuration content, written directly in the pipeline.

Mandatory: No. Modifiable: Yes.

labelPath

Path of the post-processing class label file.

Mandatory: No. Modifiable: Yes.

parentName

Index of the input data (generally the name of the upstream element). Its function is the same as that of dataSource, but dataSource is recommended. This property will be removed in later versions.

Mandatory: No (do not use it). Modifiable: Yes.

dataSource

Index of the input data (generally the name of the upstream element). The default value is the key value of the output port of the upstream plugin.

Mandatory: Recommended. Modifiable: Yes.

postProcessLibPath

Path of the post-processing shared library (.so file). If this property is not specified, the model inference result is written directly to the MxpiTensorPackageList metadata and output to the location specified by outputDeviceId (see the sketch after the table).

Mandatory: No. Modifiable: Yes.

deviceId

Ascend device ID. It is taken from the deviceId property in the stream_config field, so you do not need to set it here.

Mandatory: No. Modifiable: Yes.

tensorFormat

If this property is set to 0, NHWC is used; if it is set to 1, NCHW is used. The default value is 0.

Mandatory: No. Modifiable: Yes.

pictureCropName

Specifies whether to map the coordinates in the inference results back to the source image before cropping. If this property is not set, the coordinates are not mapped by default. To enable the mapping, set the property to the name of the image cropping plugin.

Mandatory: No. Modifiable: Yes.

waitingTime

Maximum time that a multi-batch model waits to assemble a batch. If the actual waiting time exceeds this value, the system stops waiting and performs inference automatically. The default value is 5000 ms.

Mandatory: No. Modifiable: Yes.

outputDeviceId

If the post-processing .so file is not used, the inference data is copied from memory to the location specified by outputDeviceId. To copy the data to the host, set this parameter to -1. To copy the data to the device, set it to the value of deviceId in the stream_config field.

Mandatory: No. Modifiable: Yes.

dynamicStrategy

Policy used to select a proper batch size during dynamic-batch inference. The default value is Nearest.

  • Nearest: Use the batch size whose absolute difference from the number of cached images is smallest. If two batch sizes have the same difference, use the larger one.
  • Upper: Use the minimum batch size that is greater than or equal to the number of cached images.
  • Lower: Use the maximum batch size that is less than or equal to the number of cached images.

For example, if the model supports batch sizes 2, 4, and 8 and 6 images are cached, Nearest selects 8 (4 and 8 differ from 6 by the same amount, so the larger one is used), Upper selects 8, and Lower selects 4. The maximum batch size is 128. Set the number of images to be inferred based on the model batch size; if the number of input images exceeds the maximum, the extra images are not inferred.

Mandatory: No. Modifiable: Yes.

checkImageAlignInfo

Whether to check the height and width alignment of the image. The value is a string. The default value is on, indicating that the check is performed. To disable the check, set it to off.

Mandatory: No. Modifiable: Yes.
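
As noted for postProcessLibPath, when no post-processing library is configured the plugin writes the raw inference result to MxpiTensorPackageList and copies it to the location given by outputDeviceId. A minimal props sketch for this case, assuming the tensors should be copied to the host (the element names and model path are illustrative):

"mxpi_modelinfer0": {
    "props": {
        "dataSource": "mxpi_imageresize0",
        "modelPath": "models/resnet50.om",
        "outputDeviceId": "-1"
    },
    "factory": "mxpi_modelinfer",
    "next": "appsink0"
}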

  • parentName is retained for compatibility with earlier versions. You are advised to use dataSource in later versions. dataSource and parentName are used in the same way; you can set either one of them.
  • postProcessConfigContent and postProcessConfigPath both provide the post-processing configuration content; the difference is whether the content is written directly in the pipeline or provided in a file. You can set either one of them (see the example below).
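
For example, the same post-processing configuration can be supplied either through a file or inline; the keys inside the inline string are illustrative and depend on the post-processing library used:

    "postProcessConfigPath": "models/yolov3.cfg"

or, writing the content directly into the props block:

    "postProcessConfigContent": "CLASS_NUM=80\nSCORE_THRESH=0.4"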

Model Post-processing Description

For details, see Model Post-processing (Old Framework).