Model Post-processing Description

Model post-processing is an operation bound to a specific model. In the SDK, its main task is to process the inference result tensors passed in by the model inference plugin. For example, in an object detection job, bounding boxes need to be deduplicated, sorted, and filtered. The processing result is then written into the class object that holds the object information and returned to the object detection post-processing plugin, which writes the metadata and transfers it to the downstream plugin. Post-processing dynamic libraries have been developed for all models currently supported by the SDK. For details, see Table 1. Select an existing post-processing plugin or develop one as required. For details, see Post-processing Class Development Procedure.
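The deduplication, sorting, and filtering of bounding boxes mentioned above is typically implemented as non-maximum suppression (NMS). The following is a minimal, SDK-independent sketch in Python; the function names and thresholds are illustrative, not SDK APIs:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, score_thresh=0.3, iou_thresh=0.45):
    """Filter boxes by confidence, sort by score, drop overlapping duplicates.

    Returns the indices of the boxes to keep, highest score first.
    """
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

For example, given two heavily overlapping detections of the same object and one distant detection, `nms` keeps the higher-scoring duplicate and the distant box.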

Table 1 Supported models and post-processing dynamic libraries

| Supported Model | Post-processing Class Name | Post-processing Dynamic Library Used by the Inference Plugin |
| --- | --- | --- |
| YOLOv3, YOLOv3-tiny | Yolov3PostProcess | modelpostprocessors/libyolov3postprocess.so |
| ResNet-50 | Resnet50PostProcess | modelpostprocessors/libresnet50postprocess.so |
| Faster R-CNN | FasterRcnnPostProcess | modelpostprocessors/libfasterrcnnpostprocess.so |
| SSD-VGG16 | Ssdvgg16PostProcess | modelpostprocessors/libssdvgg16postprocess.so |
| SSD MobileNet v1 FPN | SsdMobilenetv1FpnPostProcess | modelpostprocessors/libssdmobilenetv1fpnpostprocess.so |
| CRNN | CrnnPostProcessor | modelpostprocessors/libcrnnpostprocess.so |

Generally, post-processing requires a configuration file and a label file.

  • In the label file, enter the class names line by line in class ID order (lines starting with # are not read). The following is an example:
    # This is modified from https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
    unknown type
    tench, Tinca tinca
    goldfish, Carassius auratus
    great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
    tiger shark, Galeocerdo cuvieri
    ...
  • The configuration parameters required by each model differ. You can define and add configuration parameters as required. The post-processing base class reads CHECK_MODEL. If the value is true, the model output tensor shape is verified and incompatible models are rejected. If the value is false, model verification is skipped. For details about the model output tensor shapes supported by the SDK and the parameters in the configuration file, see Existing Supported Models.
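As a sketch of how these two files might be consumed: line order in the label file maps to class IDs (with # comment lines skipped), and the configuration file holds key-value pairs such as CHECK_MODEL. The parsing helpers below are illustrative, not SDK APIs, and the exact configuration syntax may differ per model:

```python
def load_labels(text):
    """Map class ID -> class name; lines starting with '#' are not read."""
    names = [ln.strip() for ln in text.splitlines()
             if ln.strip() and not ln.lstrip().startswith("#")]
    return dict(enumerate(names))

def load_config(text):
    """Parse simple KEY=VALUE configuration lines, ignoring comments."""
    cfg = {}
    for ln in text.splitlines():
        ln = ln.strip()
        if ln and not ln.startswith("#") and "=" in ln:
            key, _, val = ln.partition("=")
            cfg[key.strip()] = val.strip()
    return cfg

labels = load_labels("""\
# This is modified from https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
unknown type
tench, Tinca tinca
goldfish, Carassius auratus
""")
cfg = load_config("CHECK_MODEL=true")
```

Here `labels[0]` is "unknown type" and `labels[1]` is "tench, Tinca tinca", matching the class ID order of the example label file above; `cfg["CHECK_MODEL"]` is the string "true", which the base class would interpret as enabling output tensor shape verification.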