Overview
Here is an overview of the entire process, from a user's query to the final rendered UI:

- User enters a prompt in the UI
- UI sends the prompt to your backend API
- Backend calls the C1 API with the prompt, history, and system instructions
- C1 API returns a UI specification object (the C1 DSL), generated by the underlying LLM for the model selected in the call
- Backend relays the C1 Response to the UI
- UI renders the C1 Response using `<C1Component />`
The Backend API Call
The core integration pattern involves your backend acting as an intermediary between your UI client and the C1 API. This lets you add business logic, prepare data before calling C1, and keep your API keys secure. The C1 API is OpenAI-compatible, so you can use the official `openai` client library.
If you already have `openai` integrated, the only change required is to configure the client with your Thesys API key and the C1 `baseURL`.
Before making the call to C1, your server can enrich the user's prompt with additional context, such as conversation history or system instructions, or integrate data from your database.
main.py
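A minimal sketch of this call using the `openai` Python client. The base URL and model identifier below are illustrative placeholders; use the exact values from your Thesys dashboard and the C1 documentation.

```python
# main.py
import os

from openai import OpenAI

# Point the standard OpenAI client at C1 (base URL is a placeholder).
client = OpenAI(
    api_key=os.environ["THESYS_API_KEY"],
    base_url="https://api.thesys.dev/v1/embed",
)


def generate_ui(prompt: str, history: list[dict]) -> str:
    """Call C1 with system instructions, conversation history, and the new prompt."""
    response = client.chat.completions.create(
        model="c1/anthropic/claude-sonnet-4/v-20250617",  # placeholder model id
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            *history,  # prior turns loaded from your own store
            {"role": "user", "content": prompt},
        ],
    )
    # The returned content is the C1 Response (UI spec) to relay to the UI.
    return response.choices[0].message.content
```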
UI Rendering
The C1 React SDK provides `<C1Component>` to handle the rendering.
It is responsible for taking the response returned from C1 and rendering it as interactive React components.
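A minimal rendering sketch. The package name, `ThemeProvider` wrapper, stylesheet import, and `c1Response` prop are assumptions based on the C1 React SDK; check the SDK reference for the exact API.

```tsx
import { C1Component, ThemeProvider } from "@thesysai/genui-sdk";
import "@crayonai/react-ui/styles/index.css";

export function GeneratedUI({ c1Response }: { c1Response: string }) {
  return (
    <ThemeProvider>
      {/* Turns the C1 Response string into interactive React components */}
      <C1Component c1Response={c1Response} />
    </ThemeProvider>
  );
}
```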
If you are building a chat interface, you can use `<C1Chat>` instead.
It provides everything out of the box, including chat history and user thread management, which drastically reduces development time. A sketch of this is shown below.
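In this sketch, `apiUrl` is an assumed prop pointing at the backend endpoint that proxies C1; consult the SDK reference for the component's actual props.

```tsx
import { C1Chat } from "@thesysai/genui-sdk";
import "@crayonai/react-ui/styles/index.css";

export default function App() {
  // C1Chat ships the full chat surface: message list, input, history, threads
  return <C1Chat apiUrl="/api/chat" />;
}
```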
Selecting a model
The C1 API supports multiple models, and you can select one based on your use case. Currently supported models are:

- Anthropic: Claude Sonnet 4
- OpenAI: GPT-5
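The model is chosen per request via the `model` parameter, reusing the `client` and `messages` from the `main.py` sketch above. The identifier strings here are hypothetical; check the C1 documentation for the exact model ids.

```python
response = client.chat.completions.create(
    model="c1/anthropic/claude-sonnet-4/v-20250617",  # Claude Sonnet 4 (placeholder id)
    # model="c1/openai/gpt-5/v-20250806",             # or GPT-5 (placeholder id)
    messages=messages,
)
```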
Summary
C1 gives you full control over the end-to-end experience. Your backend orchestrates the call, adding context and business logic, and your UI fetches the resulting C1 Response and uses the C1 SDK to render the final interactive UI.