Inference Request URI

  • The URI path of the inference service can contain only letters, digits, and the special characters +, -, _, and /. The URI path must be 3 to 255 characters long. A URI that does not meet these conditions is identified as invalid, and the server returns an error code indicating an invalid URI to the client.
  • Do not add query parameters or invalid characters to the URI. Otherwise, the URI may be regarded as invalid and an error returned to the client.
  • Do not include sensitive information in the URI.
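A client can check these URI rules before sending a request. The following is a minimal sketch; the regex and length bounds simply mirror the rules stated above and are not an official API:

```python
import re

# Allowed characters per the URI rules above: letters, digits, +, -, _, and /
URI_PATH_PATTERN = re.compile(r"^[A-Za-z0-9+_/-]+$")

def is_valid_uri_path(path: str) -> bool:
    """Return True if the path satisfies the documented URI constraints."""
    # The URI path length ranges from 3 to 255 characters.
    if not 3 <= len(path) <= 255:
        return False
    # Only letters, digits, and the special characters +, -, _, and /.
    return bool(URI_PATH_PATTERN.match(path))

print(is_valid_uri_path("v2/models/resnet50/infer"))      # True
print(is_valid_uri_path("v2/models/resnet50/infer?x=1"))  # False: '?' and '=' are not allowed
```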
  1. The request API for stream inference is POST v2/streams/${STREAM_NAME}/infer,

    in which ${STREAM_NAME} indicates the inference stream name.

  2. The request API for single-model inference is POST v2/models/${MODEL_NAME}/infer,

    in which ${MODEL_NAME} indicates the model name.

    The request body of an inference request must contain the inputs key, whose value must be a list. For details, see the JSON fields of the inference request in Request Configuration Items. For details about the tensors corresponding to the inputs key, see Table 2 in Inference Configuration Items. For an inference request, you need to set the last parameter, data, in that table.
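As a sketch, a request body with the required inputs list might be assembled as follows. The tensor field names name, shape, and datatype are illustrative assumptions; consult Table 2 in Inference Configuration Items for the actual tensor fields:

```python
import json

# Hypothetical tensor description; see Table 2 in Inference Configuration
# Items for the actual fields. Per the note above, the caller sets the
# last parameter, "data", for an inference request.
input_tensor = {
    "name": "input0",              # assumed field name, for illustration
    "shape": [1, 3, 224, 224],     # assumed field name, for illustration
    "datatype": "FP32",            # assumed field name, for illustration
    "data": [0.0] * (3 * 224 * 224),
}

# The request body must contain the "inputs" key whose value is a list.
request_body = {"inputs": [input_tensor]}

payload = json.dumps(request_body)
```

The payload would then be sent as the body of the POST request described above.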

    The following table describes the fields of the inference response JSON described in Request Configuration Items.

    Table 1 Inference request response fields

    Field Name   Description                                      Data Type
    ----------   ----------------------------------------------   ---------
    isSuccess    Specifies whether the inference is successful.   Boolean
    errorCode    Error code.                                      int
    errorMsg     Error message.                                   String
    outputs      Output tensor.                                   Tensor
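A response conforming to Table 1 could be handled as in the following sketch; the sample payload is fabricated for illustration:

```python
import json

# Example response payload shaped per Table 1 (values are illustrative).
raw = '{"isSuccess": true, "errorCode": 0, "errorMsg": "", "outputs": []}'
resp = json.loads(raw)

if resp["isSuccess"]:
    # outputs holds the output tensor(s).
    outputs = resp["outputs"]
else:
    # errorCode (int) and errorMsg (String) describe the failure.
    raise RuntimeError(f'inference failed: {resp["errorCode"]}: {resp["errorMsg"]}')
```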

    • ${MODEL_NAME} and ${STREAM_NAME} must be character strings consisting of letters, digits, and the special characters +, -, and _.
    • For POST requests, content_type must be set to application/json.
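Combining the naming rule and the URI formats above, a client might validate the name and build the request path as in this sketch (the helper name is hypothetical; host and header handling depend on the HTTP client you use):

```python
import re

# ${MODEL_NAME} and ${STREAM_NAME}: letters, digits, and +, -, _ only.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9+_-]+$")

def build_infer_path(kind: str, name: str) -> str:
    """Build v2/models/${MODEL_NAME}/infer or v2/streams/${STREAM_NAME}/infer."""
    if kind not in ("models", "streams"):
        raise ValueError("kind must be 'models' or 'streams'")
    if not NAME_PATTERN.match(name):
        raise ValueError(f"invalid name: {name!r}")
    return f"v2/{kind}/{name}/infer"

# POST requests must use content_type application/json.
headers = {"Content-Type": "application/json"}

print(build_infer_path("models", "resnet50-v1_5"))  # v2/models/resnet50-v1_5/infer
```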