Stream Inference Configuration File
For stream inference, the inference service currently uses the SendMultiDataWithUniqueId and GetMultiResultWithUniqueId APIs of the MxStreamManager class to send data and obtain results. For details, see the descriptions of these two APIs in this document. This pair of APIs does not yet support sending data to, or obtaining results from, an intermediate plugin, so the pipeline file must be configured with this constraint in mind. You are advised to debug and test the pipeline file with the C++ or Python SendMultiDataWithUniqueId and GetMultiResultWithUniqueId APIs, and to deploy the pipeline file to the inference service only after the results meet expectations.
For the shape vector in an inference configuration item, a dimension set to -1 is variable. The request sent by the client must specify the actual value (a non-negative integer, not -1) of every such dimension.
{
    "PipelineSample": {
        "inferType": "streams",
        "name": "classification+detection",
        "path": "../pipeline/Sample.pipeline",
        "deviceId": 1,
        "timeoutMs": 3000,
        "inputs": [{
            "name": "appsrc0",
            "id": 0,
            "dataType": "BINARY",
            "format": "FORMAT_NONE",
            "shape": [-1]
        }],
        "outputs": [{
            "name": "appsink0",
            "id": 0,
            "dataType": "STRING",
            "format": "FORMAT_NONE",
            "shape": [-1]
        }]
    }
}
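To illustrate the -1 convention described above, the following is a minimal standalone sketch (not part of the SDK) that parses the sample configuration and merges a client-supplied shape into a declared shape. The helper names `variable_dims` and `resolve_shape` are illustrative assumptions, not real SDK APIs; the field names and values come from the sample configuration file.

```python
import json

# The sample stream-inference configuration shown above.
CONFIG = """
{
    "PipelineSample": {
        "inferType": "streams",
        "name": "classification+detection",
        "path": "../pipeline/Sample.pipeline",
        "deviceId": 1,
        "timeoutMs": 3000,
        "inputs": [{
            "name": "appsrc0",
            "id": 0,
            "dataType": "BINARY",
            "format": "FORMAT_NONE",
            "shape": [-1]
        }],
        "outputs": [{
            "name": "appsink0",
            "id": 0,
            "dataType": "STRING",
            "format": "FORMAT_NONE",
            "shape": [-1]
        }]
    }
}
"""

def variable_dims(shape):
    """Return the indices of dimensions declared variable (-1)."""
    return [i for i, d in enumerate(shape) if d == -1]

def resolve_shape(declared, actual):
    """Merge a client-supplied shape into a declared shape.

    Every -1 in the declared shape must be filled with a concrete
    non-negative value from the request; fixed dimensions must match.
    """
    if len(declared) != len(actual):
        raise ValueError("rank mismatch between declared and actual shape")
    resolved = []
    for d, a in zip(declared, actual):
        if d == -1:
            if a < 0:
                raise ValueError("variable dimension requires a concrete value")
            resolved.append(a)          # fill in the variable dimension
        elif d != a:
            raise ValueError(f"fixed dimension {d} cannot be changed to {a}")
        else:
            resolved.append(d)          # fixed dimension, kept as declared
    return resolved

cfg = json.loads(CONFIG)["PipelineSample"]
inp = cfg["inputs"][0]
print(variable_dims(inp["shape"]))            # [0] -> the only dimension is variable
print(resolve_shape(inp["shape"], [1048576])) # [1048576]
```

A client request for this configuration would therefore have to carry the concrete byte length of the binary payload, since the single declared dimension is -1.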