Information Query URI
- Query inference server information.
Response Example:
{"errorCode": 0, "errorMsg": "Succeeded!", "isSuccess": true, "outputs": [{"server_name": "StreamServer"}]}

- Check whether the inference server is running.

GET v2/live
| Parameter | Description | Data Type |
| --------- | ----------- | --------- |
| isLive | Indicates whether the inference service is running. | Boolean |
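As a minimal sketch, the `isLive` flag can be read out of the documented response structure like this. The sample payload mirrors the response example in this section; the helper name `is_live` is illustrative, and no real server is contacted.

```python
# Sketch: extract the liveness flag from a "GET v2/live" response body.
# Field names (errorCode, isSuccess, outputs, isLive) follow the response
# example in this document; the payload below is illustrative.

def is_live(response: dict) -> bool:
    """Return True when the inference service reports itself as running."""
    if response.get("errorCode") != 0 or not response.get("isSuccess"):
        return False
    outputs = response.get("outputs", [])
    return bool(outputs and outputs[0].get("isLive"))

sample = {
    "errorCode": 0,
    "errorMsg": "Succeeded!",
    "isSuccess": True,
    "outputs": [{"isLive": True}],
}
print(is_live(sample))  # → True
```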
Response Example:

{"errorCode": 0, "errorMsg": "Succeeded!", "isSuccess": true, "outputs": [{"isLive": true}]}

- Check whether the inference server is ready to accept inference requests.
- Check whether a model or stream is ready to accept inference requests.
GET v2/streams/${STREAM_NAME}/ready
GET v2/models/${MODEL_NAME}/ready

| Parameter | Description | Data Type |
| --------- | ----------- | --------- |
| isReady | Indicates whether the model or stream is ready to receive inference requests. | Boolean |
Response Example:

{"errorCode": 0, "errorMsg": "Succeeded!", "isSuccess": true, "outputs": [{"isReady": true}]}
${MODEL_NAME} and ${STREAM_NAME} must be character strings consisting of letters, digits, and the special characters +, -, and _.
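The readiness URLs above can be assembled while enforcing the naming rule just stated (letters, digits, and the special characters +, -, and _). This is a sketch: the helper name `ready_url` and the base URL are assumptions, not part of the API.

```python
import re

# Sketch: build a readiness-check URL for a model or stream.
# Per the note above, ${MODEL_NAME} / ${STREAM_NAME} may contain only
# letters, digits, and the special characters +, - and _.
_NAME_PATTERN = re.compile(r"^[A-Za-z0-9+_-]+$")

def ready_url(base: str, kind: str, name: str) -> str:
    """Return the GET path for v2/streams/.../ready or v2/models/.../ready."""
    if kind not in ("models", "streams"):
        raise ValueError("kind must be 'models' or 'streams'")
    if not _NAME_PATTERN.match(name):
        raise ValueError("name may only contain letters, digits, +, - and _")
    return f"{base}/v2/{kind}/{name}/ready"

print(ready_url("http://localhost:8080", "models", "resnet50"))
# → http://localhost:8080/v2/models/resnet50/ready
```

The hypothetical host and port (`http://localhost:8080`) are placeholders; substitute the address your inference server actually listens on.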
- Query the configuration of a model or stream.
GET v2/streams/${STREAM_NAME}/config
GET v2/models/${MODEL_NAME}/config

| Parameter | Description | Data Type |
| --------- | ----------- | --------- |
| streamsConfig/modelsConfig | Inference configuration of the model or stream. | JSON string |
Response Example:

{"errorCode": 0, "errorMsg": "Succeeded!", "isSuccess": true, "outputs": [{"ModelSample": {"deviceId": 1, "dynamicBatching": {"dynamicStrategy": "Nearest", "preferredBatchSize": [1, 2, 4, 8], "singleBatchInfer": 0, "waitingTime": 100}, "inferType": "models", "inputs": [{"dataType": "UINT8", "format": "FORMAT_NHWC", "id": 0, "name": "Placeholder", "shape": [-1, 224, 224, 3]}], "name": "resnet50", "outputs": [{"dataType": "FLOAT32", "format": "FORMAT_NONE", "id": 0, "name": "fp32_vars/dense/BiasAdd:0", "shape": [-1, 1001]}], "timeoutMs": 3000}}]}
${MODEL_NAME} and ${STREAM_NAME} must be character strings consisting of letters, digits, and the special characters +, -, and _.
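A client can pick individual settings out of the configuration response as sketched below. The sample payload is the response example from this section, trimmed for brevity; the helper name `model_config` is illustrative.

```python
# Sketch: read fields from a "GET v2/models/${MODEL_NAME}/config" response.
# The payload is a trimmed version of the response example in this
# document; key names (errorCode, outputs, deviceId, ...) match it.

def model_config(response: dict, key: str) -> dict:
    """Return the configuration dict stored under `key` in outputs[0]."""
    if response.get("errorCode") != 0:
        raise RuntimeError(response.get("errorMsg", "query failed"))
    return response["outputs"][0][key]

sample = {
    "errorCode": 0,
    "errorMsg": "Succeeded!",
    "isSuccess": True,
    "outputs": [{
        "ModelSample": {
            "deviceId": 1,
            "inferType": "models",
            "name": "resnet50",
            "inputs": [{"dataType": "UINT8", "shape": [-1, 224, 224, 3]}],
            "timeoutMs": 3000,
        }
    }],
}
cfg = model_config(sample, "ModelSample")
print(cfg["name"], cfg["deviceId"])  # → resnet50 1
```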
Parent topic: RESTful APIs