Pipeline Generation in Custom Scenarios
The {pipeline generation in custom scenarios}.py script defines the process of configuring the stream service flow, preparing the model, and generating the post-processing configuration file.
Key Points
- Define the functions used for inference service generation.
get_pipeline_name: returns the name of the main function used for process orchestration. When a custom scenario is scanned, the main script of the inference service, infer_service_generation.py, is executed, and the main function name of the custom process orchestration is bound to the name of the custom scenario.
Main function for process orchestration: orchestrates the service process in custom scenarios. It is invoked according to the scenario-to-function binding described above.
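The binding described above can be sketched as follows. Only get_pipeline_name and infer_service_generation.py come from the document; the scenario name, the dictionary, and the orchestration function are illustrative assumptions, not the actual mxAOIService API.

```python
# Hypothetical sketch of binding a custom scenario name to its
# process-orchestration main function.

def detect_pipeline(model_params):
    """Assumed main function for process orchestration in a 'detect' scenario."""
    # ... build and return the pipeline here ...
    return "pipeline built for: %s" % model_params

# Assumed mapping from custom scenario name to orchestration main function.
SCENARIO_PIPELINES = {
    "detect": detect_pipeline,
}

def get_pipeline_name(scenario):
    """Return the main-function name bound to the given custom scenario."""
    return SCENARIO_PIPELINES[scenario].__name__

print(get_pipeline_name("detect"))  # -> detect_pipeline
```

When infer_service_generation.py runs, it would look up the scanned scenario in such a mapping and invoke the bound main function.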
- Orchestrate the process using mxManufacture.
Process orchestration defines how the models in the scenario-based service process are connected in series. Processes orchestrated with mxManufacture help you quickly develop high-performance applications.
See the following example.

from mindx.sdk.stream import PluginNode, FunctionalStream

if __name__ == '__main__':
    # Define the attributes of the mxpi_tensorinfer plugin.
    props0 = {
        "dataSource": "mxpi_imageresize0",
        "modelPath": "./ssd_mobilenetv1_fpn_best.om"  # Use the actual model path.
    }
    # Define the attributes of the mxpi_objectpostprocessor plugin.
    props1 = {
        "dataSource": "mxpi_tensorinfer0",
        "postProcessConfigPath": "./post_process_0.cfg",  # Use the actual configuration file.
        "labelPath": "./label_0.names",  # Use the actual label file.
        "postProcessLibPath": "libSsdMobilenetFpn_MindsporePost.so"
    }
    # Define the attributes of the mxpi_dataserialize plugin.
    props2 = {
        "outputDataKeys": "mxpi_objectpostprocessor0"
    }
    # Connect the plugins in series.
    appsrc0 = PluginNode("appsrc")
    mxpi_imagedecoder0 = PluginNode("mxpi_imagedecoder")(appsrc0)
    mxpi_imageresize0 = PluginNode("mxpi_imageresize")(mxpi_imagedecoder0)
    mxpi_tensorinfer0 = PluginNode("mxpi_tensorinfer", props0)(mxpi_imageresize0)
    mxpi_objectpostprocessor0 = PluginNode("mxpi_objectpostprocessor", props1)(mxpi_tensorinfer0)
    mxpi_dataserialize0 = PluginNode("mxpi_dataserialize", props2)(mxpi_objectpostprocessor0)
    appsink0 = PluginNode("appsink")(mxpi_dataserialize0)
    # Generate a pipeline.
    f = FunctionalStream('stream', [appsrc0], [appsink0])
    f.set_device_id('0')
    f.build()
    print('json pipeline:' + f.to_json())

- Archive the inference model and configuration file.
The path used to archive the inference model and configuration file must comply with the result_path field in the project configuration JSON file delivered by the platform.
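The archiving step can be sketched in plain Python. Reading result_path from the project configuration JSON comes from the document; the function name, argument names, and file layout are assumptions for illustration.

```python
# Illustrative sketch: copy the inference model and post-processing
# configuration to the result_path required by the platform.
import json
import os
import shutil

def archive_artifacts(project_config_path, artifacts):
    """Copy each artifact file into the result_path from the project config JSON."""
    with open(project_config_path) as f:
        config = json.load(f)
    result_path = config["result_path"]  # destination required by the platform
    os.makedirs(result_path, exist_ok=True)
    for src in artifacts:
        shutil.copy(src, result_path)
    return result_path
```

In practice the artifacts would be the .om model and its post-processing .cfg and label files, so the generated pipeline can locate them at the platform-defined path.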
Debugging Methods
After the custom inference service package is installed, run the following command:
python mxAOIService/scripts/infer_service_generation.py --model_params=path_of_the_project_configuration_JSON_file_delivered_by_the_platform
--model_params indicates the path of the project configuration JSON file delivered by the platform.
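A minimal sketch of how the script could parse this argument with argparse; the --model_params option name comes from the command above, while the surrounding code is an assumption.

```python
# Hypothetical argument parsing for infer_service_generation.py.
import argparse

def parse_args(argv=None):
    """Parse the command-line arguments of the generation script."""
    parser = argparse.ArgumentParser(
        description="Generate the inference service pipeline.")
    parser.add_argument(
        "--model_params", required=True,
        help="Path of the project configuration JSON file delivered by the platform.")
    return parser.parse_args(argv)
```

With this, parse_args(["--model_params", "/path/to/project.json"]) exposes the configuration path as args.model_params.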
- Check whether the mxAOIService/config/models/model_configs.json file is complete.
- Check whether the model in mxAOIService/config/models/project_*/assm1-2/1/ is ready.
- Check whether the pipeline file in mxAOIService/config/models/project_*/assm1-2/1/ is complete.
- Check whether the post-processing configuration file in mxAOIService/config/models/project_*/assm1-2/1/ is complete.
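The four checks above could be automated roughly as follows. The directory layout comes from the document, but the file extensions (.om, .pipeline, .cfg) and the function name are assumptions.

```python
# Hedged sketch: verify the generated artifacts after installing the
# custom inference service package.
import glob
import os

def check_generated_artifacts(base="mxAOIService/config/models"):
    """Return a list of human-readable problems; empty means all checks passed."""
    problems = []
    # 1. The model configuration index must exist.
    if not os.path.isfile(os.path.join(base, "model_configs.json")):
        problems.append("model_configs.json is missing")
    # 2-4. Each project model directory needs a model, a pipeline,
    # and a post-processing configuration (extensions are assumed).
    for model_dir in glob.glob(os.path.join(base, "project_*", "assm1-2", "1")):
        entries = os.listdir(model_dir)
        if not any(name.endswith(".om") for name in entries):
            problems.append("no .om model in " + model_dir)
        if not any(name.endswith(".pipeline") for name in entries):
            problems.append("no pipeline file in " + model_dir)
        if not any(name.endswith(".cfg") for name in entries):
            problems.append("no post-processing config in " + model_dir)
    return problems
```

Running such a script after installation gives a quick pass/fail summary instead of inspecting each path by hand.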