Overview
The two-step Visualize pattern separates concerns between your primary LLM and the UI generation layer:

- Step 1 - Business Logic: Your primary LLM handles user requests, executes tool calls, and generates a final text/markdown response
- Step 2 - UI Generation: C1 Visualize API converts the text response into interactive, generative UI components
Use this pattern when:

- You want to use your existing LLM infrastructure for tool-calling
- You need to maintain complete conversation history with your primary LLM
- You want to add beautiful UI generation without refactoring your agent logic
For most applications, we recommend using C1 as the Gateway LLM instead, which provides lower latency and better context awareness.
Architecture
Key Concepts
Message Types
The pattern uses two distinct message storage strategies:

AIMessage: Complete conversation history with your primary LLM
- Includes user prompts, assistant responses, tool calls and their results
- Essential for maintaining context in subsequent LLM calls

UIMessage: User-facing conversation history
- User prompts and final generative UI responses from C1
- Used for rendering the chat interface
The Two-Step Flow
- Step 1: Call your primary LLM with full conversation context, allowing it to use tools and generate a complete response
- Step 2: Send the final LLM response to C1 Visualize API, which transforms it into rich UI components without needing tool access
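The two steps above can be sketched as a single orchestration function. The names and shapes here (`chatTurn`, `callLlm`, `callVisualize`, the message type) are illustrative, not the C1 SDK's actual API:

```typescript
// Minimal sketch of the two-step flow. `callLlm` and `callVisualize`
// are stand-ins for your primary LLM call and the C1 Visualize call.
type Role = "system" | "user" | "assistant" | "tool";
interface AIMessage { role: Role; content: string; }

async function chatTurn(
  userMessage: string,
  history: AIMessage[],
  callLlm: (messages: AIMessage[]) => Promise<string>,
  callVisualize: (finalText: string) => Promise<string>,
): Promise<{ aiHistory: AIMessage[]; uiResponse: string }> {
  // Step 1: the primary LLM sees the full conversation (including any
  // tool results stored in `history`) and produces a final text answer.
  const messages: AIMessage[] = [...history, { role: "user", content: userMessage }];
  const finalText = await callLlm(messages);
  messages.push({ role: "assistant", content: finalText });

  // Step 2: C1 Visualize turns the finished answer into generative UI.
  // It never sees tools or the database -- only the final text.
  const uiResponse = await callVisualize(finalText);

  return { aiHistory: messages, uiResponse };
}
```

Note that only `finalText` crosses into step 2; keeping tool plumbing out of the visualizer call is the whole point of the pattern.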
Setup
1. Install Dependencies
2. Environment Variables
Create a `.env` file with your API keys:
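A minimal `.env` might look like the following; the variable names are assumptions and should match whatever your client configuration reads:

```
# Variable names are illustrative -- use whatever your clients read
OPENAI_API_KEY=your-openai-key
THESYS_API_KEY=your-c1-key
```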
You can create a new API key from the Developer Console.
3. Configure OpenAI Clients
Create two OpenAI client instances: one for your standard LLM calls and one for C1 Visualize.

Implementation
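The two clients from the setup step might be configured like this sketch. The file path, env var names, and the C1 base URL are assumptions; in real code you would pass each object to `new OpenAI({ ... })` from the `openai` package:

```typescript
// Hypothetical lib/clients.ts -- two configurations, one per endpoint.
interface ClientConfig { apiKey?: string; baseURL?: string; }

// Standard LLM client: default OpenAI endpoint, used for tool-calling.
const llmClientConfig: ClientConfig = {
  apiKey: process.env.OPENAI_API_KEY,
};

// C1 Visualize client: same OpenAI-compatible interface, different base URL.
const c1ClientConfig: ClientConfig = {
  apiKey: process.env.THESYS_API_KEY,          // assumed variable name
  baseURL: "https://api.thesys.dev/v1/embed",  // assumed C1 endpoint
};
```

Because C1 exposes an OpenAI-compatible API, the only differences between the two clients are the key and the base URL.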
Backend API Route
Create an API route that orchestrates the two-step process:

app/api/chat/route.ts
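A sketch of what this route could look like. The request body shape and the two injected step functions are illustrative (they keep the handler testable); in real code they would call your primary LLM with tools and the C1 Visualize endpoint:

```typescript
// Hypothetical route skeleton. `runBusinessLogic` performs Step 1
// (primary LLM + tools over the full history); `visualize` performs
// Step 2 (C1 Visualize on the finished text answer).
interface ChatBody { prompt: string; history: { role: string; content: string }[]; }

function makeChatHandler(
  runBusinessLogic: (body: ChatBody) => Promise<string>, // Step 1
  visualize: (finalText: string) => Promise<string>,      // Step 2
) {
  return async function POST(req: Request): Promise<Response> {
    const body = (await req.json()) as ChatBody;

    // Step 1: primary LLM with full conversation context and tool access.
    const finalText = await runBusinessLogic(body);

    // Step 2: convert the finished answer into generative UI.
    const ui = await visualize(finalText);

    // Persist the AIMessage and UIMessage records here before responding.
    return new Response(JSON.stringify({ content: ui }), {
      headers: { "content-type": "application/json" },
    });
  };
}
```

A production version would stream the C1 response instead of buffering it, but the two-step ordering is the same.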
Frontend Component
Use the GenUI SDK to render the chat interface:

app/page.tsx
System Prompt Best Practices
When crafting your system prompt for the primary LLM, include guidance for the downstream visualizer:

- Mention the Visualizer: Inform the LLM that its output will be processed by a UI generator
- Provide Context: Explain that the visualizer doesn't have access to tools or the database
- Request Structure: Ask for actionable elements like buttons, forms, and links
- Format Hints: Suggest specific UI components when appropriate (tables, charts, etc.)
Example System Prompt
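One possible prompt following the guidance above. The wording and the shopping-assistant framing are illustrative; adapt them to your domain:

```
You are a shopping assistant for an e-commerce store.
Your final answer will be converted into interactive UI by a separate
visualizer. The visualizer cannot call tools or query the database, so
include all relevant data (names, prices, availability) in your answer.
Where appropriate, structure your answer so the visualizer can render
tables for comparisons, charts for trends, and buttons or forms for
next-step actions such as "Add to cart" or "Track order".
```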
Message Persistence
Store two types of messages with distinct purposes:

AIMessage Table
- Stores complete LLM conversation history including tool calls and results
- Used for context in subsequent LLM calls

UIMessage Table
- Stores user-facing messages with generative UI responses
- Used for rendering the chat interface
Example Prisma Schema
schema.prisma
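A minimal schema along these lines; the model and field names are assumptions, so adapt them to your app:

```prisma
model AIMessage {
  id             String   @id @default(cuid())
  conversationId String
  role           String   // "user" | "assistant" | "tool"
  content        String   // text, tool calls, or tool results (JSON)
  createdAt      DateTime @default(now())
}

model UIMessage {
  id             String   @id @default(cuid())
  conversationId String
  role           String   // "user" | "assistant"
  content        String   // user prompt or C1 generative UI response
  createdAt      DateTime @default(now())
}
```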
Troubleshooting
C1 generates generic UI instead of domain-specific components
Enhance your system prompt to provide more context about expected UI patterns and include relevant metadata in the assistant’s final message.
Tool calls not executing properly
Verify your tools array is correctly formatted and tool functions return valid JSON strings:
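As a sanity check, every tool function should return a string that parses as JSON. A hypothetical `getOrderStatus` tool:

```typescript
// Hypothetical tool function: it must return a valid JSON *string*,
// since that is what goes back to the LLM as the tool result.
function getOrderStatus(orderId: string): string {
  // Real code would query your database here.
  const result = { orderId, status: "shipped", eta: "2 days" };
  return JSON.stringify(result); // never return the raw object
}
```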
Message history growing too large
Implement message summarization or truncation for older messages while keeping recent context:
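A simple truncation sketch: keep the leading system prompt plus the most recent N messages. (Summarization would instead replace the dropped span with an LLM-written summary message.)

```typescript
interface Msg { role: "system" | "user" | "assistant" | "tool"; content: string; }

// Keep the leading system message (if any) plus the last `keep` messages.
function truncateHistory(messages: Msg[], keep: number): Msg[] {
  const system = messages[0]?.role === "system" ? [messages[0]] : [];
  const rest = messages.slice(system.length);
  return [...system, ...rest.slice(-keep)];
}
```

One caveat: if you truncate mid tool exchange, a dangling tool-call message without its result can confuse the LLM, so trim at message-pair boundaries in practice.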
Streaming not working in production
Check that your hosting platform supports streaming responses. For example, on Vercel use Edge Functions instead of Serverless Functions:
app/api/chat/route.ts
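In the Next.js App Router, a route opts into the Edge runtime with a route segment config export; the fragment below is standard Next.js, shown here in isolation:

```typescript
// app/api/chat/route.ts -- opt this route into the Edge runtime,
// which supports streamed responses on Vercel.
export const runtime = "edge";
```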
Best Practices
- Separate Concerns: Keep business logic in your primary LLM and UI generation in C1
- Rich Context: Provide detailed information in the final assistant message for better UI generation
- Actionable Elements: Always include next-step actions (buttons, forms) in your responses
- Message Storage: Store both AI and UI message histories for optimal context management
- Error Recovery: Implement graceful fallbacks if either API call fails
- Testing: Test the full two-step flow with various query types and edge cases
Full Example on GitHub
See a complete working implementation of this pattern in our e-commerce agent example with tools, database integration, and production-ready code.