Stream LLM Response (v2)

Send messages in an OpenAI-compatible format and receive streaming responses.

Request Parameters

Parameter   Required   Description
messages    Yes        Array of message objects with role and content
tools       No         Array of tool definitions for function calling

Message Format

Each message object must contain:

  • role: "user", "assistant", or "system"
  • content: The message text
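As a sketch, a request body combining the documented `messages` and `tools` parameters might be built like this. Only `messages` and `tools` come from this page; the tool-definition shape follows the OpenAI function-calling convention that this API says it is compatible with, and the tool name is hypothetical:

```python
import json

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    # Optional: tool definitions for function calling.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize to JSON for the request body.
body = json.dumps(payload)
```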

Response Format

Returns a Server-Sent Events (SSE) stream with JSON objects containing:

  • status: "success" or "fail"
  • type: "message", "tool_call", or "finish"
  • content: The response content
  • role: The assistant's role
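A minimal sketch of consuming such a stream, assuming standard SSE framing (`data: <json>` lines) and the event fields listed above; the sample events are illustrative, not actual API output:

```python
import json

def parse_sse_events(lines):
    """Yield the JSON object carried by each SSE 'data:' line."""
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

# Illustrative stream: two message chunks followed by a finish event.
sample_stream = [
    'data: {"status": "success", "type": "message", "role": "assistant", "content": "Hello"}',
    'data: {"status": "success", "type": "message", "role": "assistant", "content": ", world"}',
    'data: {"status": "success", "type": "finish", "role": "assistant", "content": ""}',
]

# Accumulate message content until the stream signals completion.
reply = ""
for event in parse_sse_events(sample_stream):
    if event["status"] != "success":
        raise RuntimeError("stream reported failure")
    if event["type"] == "message":
        reply += event["content"]
    elif event["type"] == "finish":
        break
```

A real client would read these lines from the HTTP response body as they arrive rather than from a list.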

Example

Select "LLMV2Request" from the Examples dropdown to see the request format.
