
Defining sample_scene

"sample_scene.py" defines the workflow for generating the Stream pipeline configuration, preparing models, and generating the post-processing configuration files.

Key Points for Writing

  1. Define the functions that generate the inference service.

    get_pipeline_name: returns the name of the main pipeline-orchestration function, sample_scene_pipeline.

    def get_pipeline_name():
         return "sample_scene_pipeline"

    Main pipeline-orchestration function: implements the business-flow orchestration for the custom scenario; it is invoked through the name binding established above.

    def sample_scene_pipeline(task_name, scene_data, output_path):
        # Map the sample's model name to its inherited Deployer; ssd is used as an example
        deployers_map = {
            "ssd_mobilenetv1_fpn": NewSceneDeployer
        }
        model_list = scene_data['model_list']
        # Create the scene directory
        model_path = create_scene_dir(task_name, scene_data, output_path)
        # Add the decoding and other input plugins
        input_plugins = {}
        add_decode_plugins(input_plugins)
        # Add the detection model plugins
        det_plugins = detection_process(model_list, model_path, deployers_map, input_plugins)
        serial_keys = det_plugins[-1].plugin_name()
        props_dataserialize = {
            "outputDataKeys": serial_keys
        }
        # Add the serialization and sink plugins
        dataserialize0 = PluginNode("mxpi_dataserialize", props_dataserialize)(det_plugins[-1])
        appsink0 = PluginNode("appsink")(dataserialize0)
    
        f = FunctionalStream('stream', [input_plugins.get('appsrc')], [appsink0])
        f.set_device_id('0')
        f.build()
    
        pipeline = {}
        pipeline_key = f'{scene_data["project_name"]}_{task_name}'
        pipeline[pipeline_key] = json.loads(f.to_json())
        pipeline_file = write_pipeline(pipeline, model_path)
        return pipeline_file
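    The PluginNode chaining above follows a simple builder pattern: calling a node with its upstream node records the edge, and the stream serializer walks those edges to emit the pipeline JSON. A minimal self-contained sketch of that pattern (the PluginNode class and to_pipeline helper below are simplified stand-ins for illustration, not the real mxVision classes):

    ```python
    class PluginNode:
        """Simplified stand-in: a named plugin with properties and an upstream link."""
        _counter = {}

        def __init__(self, factory, props=None, name=None):
            idx = PluginNode._counter.setdefault(factory, 0)
            PluginNode._counter[factory] = idx + 1
            self.name = name or f"{factory}{idx}"
            self.factory = factory
            self.props = props or {}
            self.prev = None

        def __call__(self, upstream):
            # Calling a node with its upstream node records the edge and
            # returns self, so chains read top-down like the pipeline.
            self.prev = upstream
            return self

    def to_pipeline(sink):
        """Walk the upstream links from the sink and emit a pipeline dict."""
        nodes, cur = [], sink
        while cur is not None:
            nodes.append(cur)
            cur = cur.prev
        pipeline = {}
        for node in reversed(nodes):
            entry = {"factory": node.factory, "props": node.props}
            if node.prev is not None:
                entry["prev"] = node.prev.name
            pipeline[node.name] = entry
        return pipeline

    # Build the minimal chain appsrc -> dataserialize -> appsink
    appsrc = PluginNode("appsrc")
    serialize = PluginNode("mxpi_dataserialize", {"outputDataKeys": "k"})(appsrc)
    sink = PluginNode("appsink")(serialize)
    pipeline = to_pipeline(sink)
    ```

    The real FunctionalStream additionally handles multiple sources and sinks, device binding, and JSON serialization, which this sketch omits.
    
    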
  2. Define the custom scenario class.

    Inherit an existing scenario Deployer and implement the generate_plugins function, which performs the main steps of building the Stream pipeline.

    If you do not inherit an existing scenario, inherit ModelDeployer instead; in that case you must also implement the generate_deploy_files function, which prepares the model and generates the post-processing configuration files.

    The following example defines a new scenario class NewSceneDeployer by inheriting the existing SSDDeployer, implementing the key plugin chaining of the new scenario's Stream pipeline.
    class NewSceneDeployer(SSDDeployer): # inherit the existing scenario SSDDeployer
        def generate_plugins(self):
            props_imageresize = {
                "dataSource": "mxpi_imagedecoder0",
                "handleMethod": "opencv"
            }
            props_tensorinfer = {
                "dataSource": "mxpi_imageresize" + str(self.index),
                "modelPath": self.deploy_files.get('om_file')
            }
            props_post_process = {
                "dataSource": self.plugin_name + str(self.index),
                "postProcessConfigPath": self.deploy_files.get('post_process_file'),
                "labelPath": self.deploy_files.get('label_file'),
                "postProcessLibPath": "libSsdMobilenetFpn_MindsporePost.so"
            }
            imageresize0 = PluginNode("mxpi_imageresize", props_imageresize)(self.prev_plugins['mxpi_imagedecoder'])
            self.plugins.append(imageresize0)
            tensorinfer0 = PluginNode("mxpi_tensorinfer", props_tensorinfer,
                                      name=self.plugin_name + str(self.index))(imageresize0)
            self.plugins.append(tensorinfer0)
            objectpostprocessor0 = PluginNode("mxpi_objectpostprocessor", props_post_process,
                                              name=self.plugin_name + str(self.index) + '_post_process')(tensorinfer0)
            self.plugins.append(objectpostprocessor0)
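    The two inheritance routes described above can be sketched abstractly. The ModelDeployer base, SSDDeployer, and the file names below are hypothetical minimal stand-ins for illustration, not the SDK classes:

    ```python
    from abc import ABC, abstractmethod

    class ModelDeployer(ABC):
        """Hypothetical minimal base: every subclass must build the plugin
        chain, and direct subclasses must also prepare the deploy files."""
        def __init__(self):
            self.plugins = []
            self.deploy_files = {}

        @abstractmethod
        def generate_plugins(self):
            ...

        @abstractmethod
        def generate_deploy_files(self):
            ...

    class SSDDeployer(ModelDeployer):
        # An "existing scenario" already implements generate_deploy_files,
        # so subclasses only need to override generate_plugins.
        def generate_deploy_files(self):
            self.deploy_files = {"om_file": "model.om"}

        def generate_plugins(self):
            self.plugins = ["mxpi_imageresize", "mxpi_tensorinfer"]

    class NewSceneDeployer(SSDDeployer):
        # New scenario: only the plugin chaining changes.
        def generate_plugins(self):
            self.plugins = ["mxpi_imageresize", "mxpi_tensorinfer",
                            "mxpi_objectpostprocessor"]

    dep = NewSceneDeployer()
    dep.generate_deploy_files()
    dep.generate_plugins()
    ```
    
    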
  3. Define deployers_map: each key must match the value of the "model_name" field in the model training scenario definition, and each value is the corresponding custom scenario class. This map associates the model names defined for training scenarios with user-defined scenario classes so that the correct class can be looked up and instantiated.
    The following example binds the new scenario class to the value of the "model_name" field from the model training scenario definition.
    deployers_map = {
         "ssd_mobilenetv1_fpn": NewSceneDeployer
     }
  4. Define get_model_with_number(model_list, application): the second argument must match the value of the "application" field in the model training scenario definition. This function retrieves the inference models labeled with the given application so that the corresponding model's training configuration can be obtained in later steps.
    The following example dispatches to the new scenario class through the deployers_map defined above.
    def detection_process(model_list, model_path, deployers_map, prev_plugins):
         detection_models = get_model_with_number(model_list, 'det')
         if len(detection_models) == 0:
             print('Error: find application model failed.')
             return False
         detection_model = detection_models[0]
         json_data = parse_train_params(detection_model['result_path'])
         if json_data is False:
             return False
         deployer = deployers_map.get(json_data['model'])
         if deployer is None:
             print('Error: unsupported model to deploy, model name: %s' % detection_model['alias_model_name'])
             return False
         dep = deployer(detection_model, 'detection_model', json_data, 0, prev_plugins)
         if not dep.run(model_path):
             return False
         return dep.plugins
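    In the example, parse_train_params reads the archived train_params.config and returns either the parsed parameters or False. A hedged sketch of such a helper, assuming the file is JSON (the actual format used by the platform may differ):

    ```python
    import json
    import os
    import tempfile

    def parse_train_params(result_path):
        """Sketch only: read train_params.config under result_path as JSON and
        return the dict, or False on any failure (so callers can test
        'if json_data is False'). The real config format may differ."""
        config_file = os.path.join(result_path, "train_params.config")
        try:
            with open(config_file, "r", encoding="utf-8") as f:
                return json.load(f)
        except (OSError, ValueError):
            return False

    # Demo with a temporary result_path
    demo_dir = tempfile.mkdtemp()
    with open(os.path.join(demo_dir, "train_params.config"), "w", encoding="utf-8") as f:
        json.dump({"model": "ssd_mobilenetv1_fpn"}, f)
    params = parse_train_params(demo_dir)
    missing = parse_train_params(os.path.join(demo_dir, "no_such_dir"))
    ```
    
    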
  5. Archive the inference models and configuration files in the location given by the "result_path" field of the project configuration JSON delivered by the platform.
    |-- mxAOIService/scripts/om_cfg/
         |-- model_service_param.json
         |-- det/
              |-- ssd_mobilenetv1_fpn_best.om
              |-- train_params.config
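    The archive layout above can be staged with a short script. The helper below is a convenience sketch only; file contents are created empty as placeholders:

    ```python
    import os
    import tempfile

    def stage_archive(root):
        """Create the om_cfg archive layout shown above under root,
        with empty placeholder files."""
        om_cfg = os.path.join(root, "mxAOIService", "scripts", "om_cfg")
        det = os.path.join(om_cfg, "det")
        os.makedirs(det, exist_ok=True)
        for path in (
            os.path.join(om_cfg, "model_service_param.json"),
            os.path.join(det, "ssd_mobilenetv1_fpn_best.om"),
            os.path.join(det, "train_params.config"),
        ):
            open(path, "a").close()
        return om_cfg

    root = tempfile.mkdtemp()
    cfg = stage_archive(root)
    ```
    
    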

Debugging

After installing the custom inference service package, run python mxAOIService/scripts/infer_service_generation.py --model_params=./scripts/om_cfg/model_service_param.json and check the contents of the generated files.

|-- mxAOIService/config/models/project_det/
     |-- model_configs.json
     |-- project_det/
          |-- assm1-2
               |-- 1
                    |-- label_0.names
                    |-- post_process_0.cfg
                    |-- mindx_sdk.pipeline
                    |-- ssd_mobilenetv1_fpn_best.om
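Beyond eyeballing the files, the generated mindx_sdk.pipeline can be checked programmatically. The sketch below infers the expected structure (stream key, appsrc/appsink factories) from the example in this section, not from an SDK specification, so adjust it to the actual generated file:

```python
import json
import os
import tempfile

def check_pipeline(pipeline_file, expected_key):
    """Sketch: verify the generated pipeline JSON contains the expected
    stream key and has both a source and a sink plugin."""
    with open(pipeline_file, "r", encoding="utf-8") as f:
        pipeline = json.load(f)
    if expected_key not in pipeline:
        return False
    stream = pipeline[expected_key]
    factories = {v.get("factory") for v in stream.values() if isinstance(v, dict)}
    return "appsrc" in factories and "appsink" in factories

# Demo against a hand-written sample file
sample = {
    "project_det_assm1-2": {
        "appsrc0": {"factory": "appsrc", "next": "appsink0"},
        "appsink0": {"factory": "appsink"},
    }
}
path = os.path.join(tempfile.mkdtemp(), "mindx_sdk.pipeline")
with open(path, "w", encoding="utf-8") as f:
    json.dump(sample, f)
ok = check_pipeline(path, "project_det_assm1-2")
```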