Defining sample_scene

sample_scene.py defines how the stream service flow is configured, how the model is prepared, and how the post-processing configuration file is generated.

Key Points

  1. Define the functions for generating the inference service.

    get_pipeline_name: returns sample_scene_pipeline, the name of the main process-orchestration function.

    def get_pipeline_name():
        return "sample_scene_pipeline"

    Process orchestration main function: orchestrates the process for the custom scenario. The service framework invokes it through the bound mapping between the pipeline name and the function, as sketched above.

    import json  # Required by json.loads below.

    def sample_scene_pipeline(task_name, scene_data, output_path):
        # Map each trained model name to its deployer class. The following uses SSD as an example.
        deployers_map = {
            "ssd_mobilenetv1_fpn": NewSceneDeployer
        }
        model_list = scene_data['model_list']
        # Create a folder.
        model_path = create_scene_dir(task_name, scene_data, output_path)
        # Add a decoding plugin.
        input_plugins = {}
        add_decode_plugins(input_plugins)
        # Add a detection model.
        det_plugins = detection_process(model_list, model_path, deployers_map, input_plugins)
        serial_keys = det_plugins[-1].plugin_name()
        props_dataserialize = {
            "outputDataKeys": serial_keys
        }
        # Add a serialization plugin.
        dataserialize0 = PluginNode("mxpi_dataserialize", props_dataserialize)(det_plugins[-1])
        appsink0 = PluginNode("appsink")(dataserialize0)
    
        f = FunctionalStream('stream', [input_plugins.get('appsrc')], [appsink0])
        f.set_device_id('0')
        f.build()
    
        pipeline = {}
        pipeline_key = f'{scene_data["project_name"]}_{task_name}'
        pipeline[pipeline_key] = json.loads(f.to_json())
        pipeline_file = write_pipeline(pipeline, model_path)
        return pipeline_file
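
    The helper add_decode_plugins called above is not defined in this file. A minimal sketch of what it might contain, assuming the shared input chain is an appsrc followed by mxpi_imagedecoder; the dictionary keys mirror how input_plugins and prev_plugins are used in the code above and in the deployer below:

    def add_decode_plugins(input_plugins):
        # Hedged sketch: create the shared input chain appsrc -> image decoder
        # and record both nodes so later stages can look them up by type name.
        appsrc0 = PluginNode("appsrc")
        imagedecoder0 = PluginNode("mxpi_imagedecoder")(appsrc0)
        input_plugins['appsrc'] = appsrc0
        input_plugins['mxpi_imagedecoder'] = imagedecoder0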
  2. Define a custom scenario class.

    Inherit the deployer of an existing scenario and implement the generate_plugins function. This function builds the stream service flow and is mandatory.

    If no existing scenario is inherited, inherit ModelDeployer directly and implement the generate_deploy_files function, which prepares the model and generates the post-processing configuration files.
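
    A minimal sketch of this second approach, assuming generate_deploy_files receives the target directory and records its outputs in a deploy_files dictionary as used in the examples; the signature, the self.* fields, and the file contents are illustrative assumptions:

    import os
    import shutil

    class StandaloneSceneDeployer(ModelDeployer):  # Does not reuse an existing scenario.
        def generate_deploy_files(self, model_path):
            # Copy the trained model into the deployment directory
            # (self.model_info['result_path'] is an assumed field).
            om_file = os.path.join(model_path, 'ssd_mobilenetv1_fpn_best.om')
            shutil.copy(os.path.join(self.model_info['result_path'],
                                     'ssd_mobilenetv1_fpn_best.om'), om_file)
            # Write a post-processing configuration file (keys are examples).
            post_process_file = os.path.join(model_path, 'post_process_0.cfg')
            with open(post_process_file, 'w') as f:
                f.write('CLASS_NUM=2\nSCORE_THRESH=0.6\n')
            self.deploy_files = {'om_file': om_file,
                                 'post_process_file': post_process_file}
            return True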

    The following example defines NewSceneDeployer for the new scenario. It inherits the existing SSDDeployer and implements the stream service flow by connecting the key plugins of the new scenario in series:

    class NewSceneDeployer(SSDDeployer):  # Inherits SSDDeployer from the existing scenario.
        def generate_plugins(self):
            # Plugin properties: each dataSource names the upstream plugin instance.
            props_imageresize = {
                "dataSource": "mxpi_imagedecoder0",
                "handleMethod": "opencv"
            }
            props_tensorinfer = {
                "dataSource": "mxpi_imageresize" + str(self.index),
                "modelPath": self.deploy_files.get('om_file')
            }
            props_post_process = {
                "dataSource": self.plugin_name + str(self.index),
                "postProcessConfigPath": self.deploy_files.get('post_process_file'),
                "labelPath": self.deploy_files.get('label_file'),
                "postProcessLibPath": "libSsdMobilenetFpn_MindsporePost.so"
            }
            # Connect resize -> inference -> post-processing in series, starting
            # from the shared image decoder created by add_decode_plugins.
            imageresize0 = PluginNode("mxpi_imageresize", props_imageresize)(self.prev_plugins['mxpi_imagedecoder'])
            self.plugins.append(imageresize0)
            tensorinfer0 = PluginNode("mxpi_tensorinfer", props_tensorinfer,
                                      name=self.plugin_name + str(self.index))(imageresize0)
            self.plugins.append(tensorinfer0)
            objectpostprocessor0 = PluginNode("mxpi_objectpostprocessor", props_post_process,
                                              name=self.plugin_name + str(self.index) + '_post_process')(tensorinfer0)
            self.plugins.append(objectpostprocessor0)
  3. Define deployers_map: the key must be the same as the model_name field defined in the model training scenario, and the value must be the custom scenario class. This mapping between the model name from the model training scenario and the custom scenario class is what allows the custom scenario to be invoked.
    In the following example, the key matches the model_name field defined in the model training scenario, and the value is the new scenario class:
    deployers_map = {
        "ssd_mobilenetv1_fpn": NewSceneDeployer
    }
  4. Define get_model_with_type(model_list, application): the second input parameter must match the value of the application field defined in the model training scenario. This function obtains the inference models of the requested application type, from which the model training configuration information is then read.
    The following example uses deployers_map to invoke the new scenario class (a possible implementation of get_model_with_type is sketched after it):
    def detection_process(model_list, model_path, deployers_map, prev_plugins):
        # Select the models whose application type is 'det' (detection).
        detection_models = get_model_with_type(model_list, 'det')
        if len(detection_models) == 0:
            print('Error: find application model failed.')
            return False
        detection_model = detection_models[0]
        # Read the training configuration archived with the model.
        json_data = parse_train_params(detection_model['result_path'])
        if json_data is False:
            return False
        # Look up the deployer class bound to the trained model name.
        deployer = deployers_map.get(json_data['model'])
        if deployer is None:
            print('Error: unsupported model to deploy, model name: %s' % detection_model['alias_model_name'])
            return False
        dep = deployer(detection_model, 'detection_model', json_data, 0, prev_plugins)
        if not dep.run(model_path):
            return False
        return dep.plugins
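
    For reference, a minimal sketch of what get_model_with_type might look like, assuming each entry of model_list is a dictionary with an application field (the field name is an assumption inferred from the usage above):

    def get_model_with_type(model_list, application):
        # Hedged sketch: keep only the models whose application type matches,
        # e.g. 'det' for detection models.
        return [model for model in model_list
                if model.get('application') == application]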
  5. Archive the inference model and configuration file. The path must be the same as that of the result_path field in the project configuration JSON file delivered by the platform.
    |-- mxAOIService/scripts/om_cfg/
         |-- model_service_param.json
         |-- det/
              |-- ssd_mobilenetv1_fpn_best.om
              |-- train_params.config
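
    parse_train_params, called in the example above, reads this archived training configuration. A minimal sketch, assuming train_params.config is a JSON file containing at least the model field looked up in deployers_map (the file format is an assumption):

    import json
    import os

    def parse_train_params(result_path):
        # Hedged sketch: load train_params.config from the archived result
        # path; return False on failure, matching the `json_data is False`
        # check in detection_process above.
        config_file = os.path.join(result_path, 'train_params.config')
        try:
            with open(config_file) as f:
                return json.load(f)
        except (OSError, ValueError):
            print('Error: parse train params failed, path: %s' % config_file)
            return False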

Debugging Methods

After installing the custom inference service package, run the python mxAOIService/scripts/infer_service_generation.py --model_params=./scripts/om_cfg/model_service_param.json command and check the generated file content.

|-- mxAOIService/config/models/project_det/
     |-- model_configs.json
     |-- project_det/
          |-- assm1-2
               |-- 1
                    |-- label_0.names
                    |-- post_process_0.cfg
                    |-- mindx_sdk.pipeline
                    |-- ssd_mobilenetv1_fpn_best.om
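
To sanity-check the generated pipeline, it can help to load mindx_sdk.pipeline and list the configured plugins. A short hedged snippet, assuming the usual MindX SDK pipeline layout in which each stream maps plugin names to their configurations alongside a stream_config entry:

    import json

    # Open the generated pipeline file (path per the directory tree above;
    # adjust it to your project) and print the plugins of each stream.
    with open('mindx_sdk.pipeline') as f:
        pipeline = json.load(f)
    for stream_name, entries in pipeline.items():
        plugins = [name for name in entries if name != 'stream_config']
        print('%s: %s' % (stream_name, ', '.join(plugins)))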