Using vLLM's OpenAI-compatible API

The v1/chat/completions endpoint

Inference endpoint. Set the stream parameter in the request body to false for a single (non-streaming) text completion, or to true for streaming inference:

curl -H "Accept: application/json" -H "Content-Type: application/json" --cacert ca.pem --cert client.pem --key client.key.pem -X POST -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{
      "role": "system",
      "content": "You are a helpful assistant."
    }],
    "max_tokens": 20,
    "presence_penalty": 1.03,
    "frequency_penalty": 1.0,
    "seed": null,
    "temperature": 0.5,
    "top_p": 0.95,
    "stream": false
}' https://127.0.0.1:1025/v1/chat/completions
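The same request can be issued from Python with only the standard library. This is a minimal sketch mirroring the curl example above; the host, port, and mTLS file names (ca.pem, client.pem, client.key.pem) are taken from that example and are placeholders for your actual deployment.

```python
import json
import ssl
import urllib.request

# Assumed endpoint, mirroring the curl example above.
URL = "https://127.0.0.1:1025/v1/chat/completions"

def build_chat_payload(content, stream=False):
    """Build a request body matching the curl example; `stream` toggles
    between a single completion (False) and streaming inference (True)."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "system", "content": content}],
        "max_tokens": 20,
        "presence_penalty": 1.03,
        "frequency_penalty": 1.0,
        "seed": None,
        "temperature": 0.5,
        "top_p": 0.95,
        "stream": stream,
    }

def post_chat(payload):
    """POST the payload over mutual TLS (certificate paths are placeholders)."""
    ctx = ssl.create_default_context(cafile="ca.pem")
    ctx.load_cert_chain(certfile="client.pem", keyfile="client.key.pem")
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())
```

With stream set to false, post_chat returns one complete JSON response; with stream set to true the server instead emits incremental chunks, which must be read line by line rather than with a single json.loads.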

The v1/completions endpoint

Inference endpoint. As above, set the stream parameter in the request body to false for a single (non-streaming) text completion, or to true for streaming inference:

curl -H "Accept: application/json" -H "Content-Type: application/json" --cacert ca.pem --cert client.pem --key client.key.pem -X POST -d '{
    "model": "gpt-3.5-turbo",
    "prompt": "You are a helpful assistant.",
    "max_tokens": 20,
    "presence_penalty": 1.03,
    "frequency_penalty": 1.0,
    "seed": null,
    "temperature": 0.5,
    "top_p": 0.95,
    "stream": false
}' https://127.0.0.1:1025/v1/completions
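When stream is set to true, both endpoints return server-sent events: each chunk arrives as a "data: {...}" line and the stream is terminated by "data: [DONE]". A sketch of how such a stream might be reassembled into the full generated text, assuming the standard OpenAI streaming chunk layout ("text" for /v1/completions, "delta"/"content" for /v1/chat/completions):

```python
import json

def parse_sse_chunks(lines):
    """Join the incremental text pieces from a streaming (stream=true)
    response. Each event is a 'data: {...}' line; 'data: [DONE]' ends
    the stream. Handles both completion-style chunks ("text") and
    chat-style chunks ("delta" -> "content")."""
    pieces = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        body = line[len("data:"):].strip()
        if body == "[DONE]":
            break
        choice = json.loads(body)["choices"][0]
        text = choice.get("text") or choice.get("delta", {}).get("content", "")
        if text:
            pieces.append(text)
    return "".join(pieces)
```

For example, feeding the raw response lines of a streamed /v1/completions call through parse_sse_chunks yields the same text that a non-streaming (stream=false) call would return in a single response body.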

For other endpoints, see the chapter on the OpenAI-compatible API.