Google ADK (Agent Development Kit) is Google’s open-source framework for building AI agents. It provides a model-agnostic orchestration layer that works with a wide range of LLM providers through its LiteLLM integration. This guide demonstrates how to integrate Google ADK with Thesys C1 to create a full-stack chat application with generative UI. You’ll use ADK’s agent framework, session management, and streaming capabilities while calling C1 models through their OpenAI-compatible API. We’ll build the complete application in two parts:
  • Backend: A FastAPI server with Google ADK’s LlmAgent framework and Thesys C1 models
  • Frontend: A React interface with C1Chat for rich conversational UI
This guide assumes you have basic knowledge of Python, FastAPI, and React. You’ll also need a Thesys API key from the C1 Console.

Part 1: Backend Implementation

The backend uses Google ADK’s agent framework with LiteLLM to create an intelligent assistant that processes messages and streams responses via Thesys C1.

Set up the project structure

We’ll create separate directories for backend and frontend to keep concerns separated. Create a new directory for your Google ADK project:
mkdir google-adk-c1
cd google-adk-c1
mkdir backend frontend

Install Python dependencies

We need FastAPI for the web server, Google ADK for the agent framework, LiteLLM for model integration, and uvicorn for serving the application. Create a requirements.txt file in the backend directory:
backend/requirements.txt
fastapi
uvicorn[standard]
python-dotenv
python-multipart
litellm
google-adk
Create and activate a virtual environment:
cd backend
python -m venv venv

# On macOS/Linux:
source venv/bin/activate

# On Windows:
# venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Create configuration

Centralize all configuration in one file to make it easy to manage API keys, models, and settings. ADK uses LiteLLM which expects OpenAI-compatible environment variables. Create a config.py file for environment configuration:
backend/config.py
import os
from dotenv import load_dotenv

load_dotenv()

# Thesys C1 configuration (any OpenAI-compatible API works here via LiteLLM)
THESYS_API_KEY = os.getenv("THESYS_API_KEY", "")
THESYS_BASE_URL = "https://api.thesys.dev/v1/embed"
# Use litellm format: "openai/model-name" for LiteLLM in ADK
THESYS_MODEL = "openai/c1/anthropic/claude-sonnet-4/v-20251230"

# Server Configuration
PORT = int(os.getenv("PORT", "8000"))
FRONTEND_URL = os.getenv("FRONTEND_URL", "http://localhost:5173")

# Google ADK Configuration
APP_NAME = "c1chat_assistant"
USER_ID = "unknown"

# System Prompt
SYSTEM_PROMPT = """You are a helpful AI assistant powered by Google's Agent Development Kit (ADK) with Thesys C1.
You leverage ADK's agent framework for orchestration while using C1 models for generation.
Be friendly, concise, and helpful in your responses."""
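A quick way to sanity-check the os.getenv calls used above: when a variable is unset, the second argument is the fallback, and anything set in the environment (including values loaded from .env) takes precedence. A minimal stdlib-only sketch:

```python
import os

# Unset -> the default passed to os.getenv applies
os.environ.pop("PORT", None)
assert int(os.getenv("PORT", "8000")) == 8000

# Set (e.g. via .env and load_dotenv) -> the environment value wins
os.environ["PORT"] = "9001"
assert int(os.getenv("PORT", "8000")) == 9001
print("config defaults OK")
```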

Create the ADK agent

The agent uses Google ADK’s LlmAgent framework with LiteLLM to integrate with OpenAI-compatible APIs (like Thesys). ADK handles session management, streaming, and conversation orchestration. Create the Google ADK assistant agent:
backend/agents/assistant.py
"""
Assistant Agent - Main conversational agent using Google Agent Development Kit (ADK).
Uses OpenAI client through ADK's LiteLLM wrapper for model-agnostic agent framework.
Integrates with C1Chat interface through streaming responses.
"""

from typing import AsyncGenerator
import os
from google.genai.types import Content, Part
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.adk.agents.run_config import RunConfig, StreamingMode
from config import (
    THESYS_API_KEY,
    THESYS_BASE_URL,
    THESYS_MODEL,
    SYSTEM_PROMPT,
    APP_NAME,
    USER_ID,
)


class AssistantAgent:
    """
    Assistant agent using Google ADK with OpenAI client via LiteLLM.
    Leverages ADK's agent framework while using OpenAI (or compatible) models.
    """

    def __init__(self):
        """Initialize the assistant agent with Google ADK + OpenAI."""
        # Set OpenAI API key for LiteLLM
        os.environ["OPENAI_API_KEY"] = THESYS_API_KEY
        if THESYS_BASE_URL:
            os.environ["OPENAI_API_BASE"] = THESYS_BASE_URL

        # Create LiteLLM model instance pointing to OpenAI
        model = LiteLlm(model=THESYS_MODEL)

        # Define the ADK agent with OpenAI model
        self.agent = LlmAgent(
            model=model,
            name="c1",
            instruction=SYSTEM_PROMPT,
            tools=[],  # Add tools here as needed
        )

        # Session service for managing conversation state
        self.session_service = InMemorySessionService()

        # Runner to execute agent
        self.runner = Runner(
            app_name=APP_NAME,
            agent=self.agent,
            session_service=self.session_service,
        )

    async def process_message(
        self, thread_id: str, user_message: str
    ) -> AsyncGenerator[str, None]:
        """
        Process a user message and stream the response using Google ADK + OpenAI.

        Args:
            thread_id: Unique identifier for the conversation thread
            user_message: The user's message content

        Yields:
            Chunks of the assistant's response text, streamed to the client
        """
        # Create content for ADK
        content = Content(role="user", parts=[Part(text=user_message)])
        session = await self.session_service.get_session(
            app_name=APP_NAME, user_id=USER_ID, session_id=thread_id
        )

        if not session:
            # If it doesn't exist, create it explicitly
            session = await self.session_service.create_session(
                app_name=APP_NAME, user_id=USER_ID, session_id=thread_id
            )

        # Configure streaming mode for real-time chunk-by-chunk streaming
        run_config = RunConfig(
            streaming_mode=StreamingMode.SSE,
            response_modalities=["TEXT"],
        )

        # Run the agent with streaming enabled
        async for event in self.runner.run_async(
            user_id=USER_ID,
            session_id=session.id,
            new_message=content,
            run_config=run_config,
        ):
            if event.content and event.content.parts:
                for part in event.content.parts:
                    if part.text:
                        yield part.text


# Global agent instance
assistant_agent = AssistantAgent()
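Note that process_message is simply an async generator of text chunks, which the FastAPI server later streams to the client. The pattern can be sketched without any ADK dependency (fake_stream below is a hypothetical stand-in for the runner loop):

```python
import asyncio
from typing import AsyncGenerator


async def fake_stream(message: str) -> AsyncGenerator[str, None]:
    # Stand-in for the ADK runner loop: yield the reply chunk by chunk
    for word in f"Echo: {message}".split():
        yield word + " "


async def collect() -> str:
    # Consume the generator exactly as StreamingResponse would
    chunks = []
    async for chunk in fake_stream("hello"):
        chunks.append(chunk)
    return "".join(chunks)


print(asyncio.run(collect()))  # Echo: hello
```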
Create the __init__.py file:
backend/agents/__init__.py
from .assistant import assistant_agent

__all__ = ["assistant_agent"]

Create the FastAPI server

The server exposes the /api/chat endpoint that C1Chat will connect to, handling CORS and streaming responses. ADK’s runner handles the heavy lifting of agent execution. Create the main FastAPI server with streaming support:
backend/main.py
"""
FastAPI Server for Google ADK + C1Chat Integration
Provides streaming chat endpoint compatible with C1Chat component.
"""

from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from typing import Optional
import uvicorn

from agents.assistant import assistant_agent
from config import PORT, FRONTEND_URL


# Request/Response Models
class ChatMessage(BaseModel):
    """Chat message model"""
    role: str
    content: str
    id: Optional[str] = None


class ChatRequest(BaseModel):
    """Chat request model compatible with C1Chat"""
    prompt: ChatMessage
    threadId: str
    responseId: Optional[str] = None


# Initialize FastAPI app
app = FastAPI(
    title="Google ADK + C1Chat API",
    description="Backend API for Google ADK with C1Chat integration",
    version="1.0.0",
)

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=[FRONTEND_URL, "http://localhost:5173", "http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.get("/")
async def root():
    """Health check endpoint"""
    return {
        "status": "ok",
        "message": "Google ADK + C1Chat API is running",
        "version": "1.0.0",
    }


@app.get("/health")
async def health():
    """Health check endpoint"""
    return {"status": "healthy"}


@app.post("/api/chat")
async def chat(request: ChatRequest):
    """
    Chat endpoint compatible with C1Chat component.
    Streams responses from the ADK agent.

    Args:
        request: ChatRequest containing the user's message and thread ID

    Returns:
        StreamingResponse with text/event-stream content
    """
    try:
        # Extract user message
        user_message = request.prompt.content
        thread_id = request.threadId
        # Return streaming response
        return StreamingResponse(
            assistant_agent.process_message(thread_id, user_message),
            media_type="text/event-stream",
            headers={
                "Cache-Control": "no-cache, no-transform",
                "Connection": "keep-alive",
            },
        )

    except Exception as e:
        print(f"Chat endpoint error: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))


if __name__ == "__main__":
    print(f"Starting Google ADK + C1Chat server on port {PORT}")
    print(f"Frontend URL: {FRONTEND_URL}")
    print(f"API will be available at: http://localhost:{PORT}/api/chat")

    uvicorn.run("main:app", host="0.0.0.0", port=PORT, reload=True, log_level="info")
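The ChatRequest and ChatMessage models above define the wire format C1Chat posts to /api/chat. As a standalone sanity check (assuming pydantic v2, which current FastAPI versions install), the payload shape can be validated without running the server:

```python
from typing import Optional

from pydantic import BaseModel


class ChatMessage(BaseModel):
    role: str
    content: str
    id: Optional[str] = None


class ChatRequest(BaseModel):
    prompt: ChatMessage
    threadId: str
    responseId: Optional[str] = None


# Example payload in the shape C1Chat sends to the endpoint
payload = {
    "prompt": {"role": "user", "content": "Hello!"},
    "threadId": "thread-123",
}
req = ChatRequest.model_validate(payload)
print(req.prompt.content)  # Hello!
```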

Set up environment variables

Store your Thesys API key securely in environment variables instead of hardcoding it. Create a .env file in the backend directory:
backend/.env
# Required: Your Thesys API key
THESYS_API_KEY=your_thesys_api_key_here

# Optional: Server configuration
PORT=8000
FRONTEND_URL=http://localhost:5173

Test the backend

Verify the server starts correctly and the health endpoint responds before building the frontend. Run the backend server:
cd backend
python main.py
The server will start on http://localhost:8000. You can test the health endpoint:
curl http://localhost:8000/health

Part 2: Frontend Implementation

Now let’s create a React frontend that integrates with our Google ADK backend using C1Chat.

Set up the frontend project

We’ll use Vite with React and TypeScript for fast development and type safety. Create a new React project with Vite:
cd ..
npm create vite@latest frontend -- --template react-ts
cd frontend

Install dependencies

The Thesys GenUI SDK provides the C1Chat component that connects to our backend. Install the necessary packages for C1Chat integration:
npm install @thesysai/genui-sdk

Create the main App component

C1Chat handles all the UI complexity - just point it to your backend API endpoint. Create the main App component with C1Chat:
src/App.tsx
import { C1Chat } from "@thesysai/genui-sdk";

function App() {
  const apiUrl =
    import.meta.env.VITE_API_URL || "http://localhost:8000/api/chat";

  return <C1Chat apiUrl={apiUrl} />;
}

export default App;

Update the main entry point

Import C1 styles globally to ensure the chat interface renders correctly. Update the main.tsx file to import C1 styles:
src/main.tsx
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import '@thesysai/genui-sdk/styles'
import App from './App.tsx'

createRoot(document.getElementById('root')!).render(
  <StrictMode>
    <App />
  </StrictMode>,
)

Set up environment variables (optional)

Override the default API URL if your backend runs on a different port or domain. Create a .env file in the frontend directory:
frontend/.env
# Backend API URL (optional, defaults to localhost:8000)
VITE_API_URL=http://localhost:8000/api/chat

Run the frontend

Start the development server with hot reload for instant updates during development. With your backend server already running, start the frontend:
npm run dev
Visit http://localhost:5173 to interact with your Google ADK + C1Chat application!

Running Both Servers

You’ll need two terminal windows.

Terminal 1 - Backend:
cd backend
source venv/bin/activate  # Activate venv
python main.py
Terminal 2 - Frontend:
cd frontend
npm run dev
Open http://localhost:5173 in your browser.

Understanding Google ADK Integration

This implementation leverages several key components of Google ADK:

ADK Components Used

  1. LlmAgent: The core agent class that orchestrates conversations with instructions and optional tools
  2. LiteLlm: ADK’s model adapter that supports OpenAI-compatible APIs through LiteLLM
  3. InMemorySessionService: Manages conversation sessions and history across multiple threads
  4. Runner: Executes agent operations with streaming support and session management
  5. StreamingMode.SSE: Server-Sent Events streaming for real-time responses
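The get-or-create pattern process_message runs against InMemorySessionService can be illustrated with a small dict-backed stand-in (FakeSessionService is hypothetical, for illustration only; the real service also tracks events and state):

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class FakeSessionService:
    """Dict-backed stand-in for InMemorySessionService (illustration only)."""
    _sessions: dict = field(default_factory=dict)

    async def get_session(self, *, app_name: str, user_id: str, session_id: str):
        # Return None on a miss, like the real service
        return self._sessions.get((app_name, user_id, session_id))

    async def create_session(self, *, app_name: str, user_id: str, session_id: str):
        session = {"id": session_id, "events": []}
        self._sessions[(app_name, user_id, session_id)] = session
        return session


async def get_or_create(svc: FakeSessionService, thread_id: str):
    # Mirror process_message: look up the thread's session, create on miss
    session = await svc.get_session(app_name="app", user_id="u", session_id=thread_id)
    if not session:
        session = await svc.create_session(app_name="app", user_id="u", session_id=thread_id)
    return session


svc = FakeSessionService()
first = asyncio.run(get_or_create(svc, "t1"))
second = asyncio.run(get_or_create(svc, "t1"))
print(first is second)  # True: the same session is reused per thread
```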

Why Use Google ADK?

  • Model Agnostic: Switch between different LLM providers without changing agent code
  • Session Management: Built-in conversation history and state management
  • Tool Integration: Easy addition of function calling and external tools
  • Production Ready: Includes proper session handling, error management, and streaming
  • Framework Features: Leverage ADK’s orchestration, multi-agent support, and extensibility

Adding Tools to Your Agent

You can extend the agent with tools (function calling). ADK accepts plain Python functions as tools and derives each tool’s schema from the function’s signature and docstring, so no wrapper class is needed:
def get_weather(location: str) -> dict:
    """Get current weather for a location.

    Args:
        location: The city or region to look up.
    """
    # ... call a weather API here and return a JSON-serializable dict
    return {"location": location, "forecast": "sunny"}

# Add to agent
self.agent = LlmAgent(
    model=model,
    name="c1",
    instruction=SYSTEM_PROMPT,
    tools=[get_weather],  # Add your tools here
)

Example Queries

Try these example queries to test your application:
  • “Create a task list for planning a vacation”
  • “Show me a comparison table of programming languages”
  • “Generate a chart showing my weekly expenses”
  • “Create a form to collect user feedback”
  • “Help me organize my study schedule”

View the code

Find the complete code and more examples on our GitHub repository.