POST /v1/embed/chat/completions
  • Drop-in replacement for an existing LLM chat completions endpoint that generates UI rather than text completions.
  • Supports tool calling for fetching external data.
  • Ability to steer semantic design via system prompts.
  • Visual customization via Crayon.
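Since the endpoint is a drop-in replacement, tool calling presumably follows the familiar chat-completions tool schema. A minimal sketch, assuming that schema; the tool name and parameters here are hypothetical:

```python
# Hypothetical tool definition in the OpenAI-compatible "function" schema,
# used so the endpoint can fetch external data before generating UI.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Fetch current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```

A list of such definitions would be passed in the request's `tools` field, assuming the drop-in schema carries over unchanged.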

Request

Accepts both streaming and non-streaming request payloads.
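A sketch of a request body, assuming the standard chat-completions shape implied by "drop-in replacement"; the model name is hypothetical, and the system message illustrates steering semantic design:

```python
import json

def build_request(messages, model, stream=False):
    """Build a chat-completions-style request body; `stream` toggles modes."""
    return {"model": model, "messages": messages, "stream": stream}

body = build_request(
    [
        # System prompt steering the generated UI's semantic design
        {"role": "system", "content": "Prefer card layouts for lists of items."},
        {"role": "user", "content": "Show my recent orders."},
    ],
    model="example-model",  # hypothetical model identifier
    stream=True,            # request streaming chunks instead of one message
)
payload = json.dumps(body)  # serialized JSON sent as the POST body
```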

Response

Returns a stream of chunks in streaming mode, or a single message object in non-streaming mode.
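A client therefore needs to handle both shapes. A minimal sketch, assuming chunks and messages follow the usual chat-completions `choices`/`delta` layout (the example content is illustrative, not actual endpoint output):

```python
def collect_content(response, stream):
    """Extract assistant content from either response mode."""
    if stream:
        # Streaming: concatenate the delta fragments carried by each chunk
        return "".join(
            chunk["choices"][0]["delta"].get("content", "")
            for chunk in response
        )
    # Non-streaming: a single complete message object
    return response["choices"][0]["message"]["content"]

# Example streamed chunks (illustrative shapes, not real endpoint output)
chunks = [
    {"choices": [{"delta": {"content": "<Card>"}}]},
    {"choices": [{"delta": {"content": "</Card>"}}]},
]
collect_content(chunks, stream=True)  # "<Card></Card>"
```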