For the purpose of demonstration, this guide builds a GenUI search application in which the user can search for information about companies. The guide is broken down into two sections:

  1. Implementing the backend
  2. Implementing the frontend

A complete example implementation of the search application demonstrating C1Component usage can be found here.

This guide is much easier to follow if you’ve already completed the Quickstart. You can simply replace code in existing files and create new files as required!

Implementing the backend

1. Add a system prompt

For the search application, the following system prompt can be used to tailor the UI output so that the user can search for companies and ask follow-up questions:

app/api/chat/systemPrompt.ts
export const systemPrompt = `
  You are a business research assistant, similar to Crunchbase, that answers questions about a company or domain.
  Given a company name or domain, you will search the web for the latest information.

  At the end of your response, add a form with a single input field to ask for follow-up questions.
`;

To learn more about system prompts and how you can use them to tailor the LLM's output to your specific use case, check out the Using System Prompts guide.

2. Add tools

In a search application, the agent may need a tool to search the web for up-to-date information. This guide adds a web search tool powered by Tavily, along with the zod and zod-to-json-schema packages to define the tool's parameter schema.

To learn more about tool-calling and how you can use tools to extend the capabilities of your agent, check out the Tool Calling guide.

app/api/chat/tools.ts
import { JSONSchema } from "openai/lib/jsonschema.mjs";
import { RunnableToolFunctionWithParse } from "openai/lib/RunnableFunction.mjs";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { tavily } from "@tavily/core";

const tavilyClient = tavily({ apiKey: process.env.TAVILY_API_KEY });

export const tools: [
  RunnableToolFunctionWithParse<{
    searchQuery: string;
  }>
] = [
  {
    type: "function",
    function: {
      name: "web_search",
      description:
        "Search the web for a given query, will return details about anything including business",
      parse: (input) => {
        return JSON.parse(input) as { searchQuery: string };
      },
      parameters: zodToJsonSchema(
        z.object({
          searchQuery: z.string().describe("search query"),
        })
      ) as JSONSchema,
      function: async ({ searchQuery }: { searchQuery: string }) => {
        const results = await tavilyClient.search(searchQuery, {
          maxResults: 5,
        });

        return JSON.stringify(results);
      },
      strict: true,
    },
  },
];
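As a side note, the JSON Schema that `zodToJsonSchema` generates for the tool's parameters has roughly the following shape. This is a purely illustrative sketch (the `webSearchParameters` and `parseArgs` names are hypothetical, and exact extra keys such as `$schema` depend on the library version), but it shows what the LLM receives and what `parse` does with the model's raw argument string:

```typescript
// Illustrative only: approximate JSON Schema produced by zodToJsonSchema
// for z.object({ searchQuery: z.string().describe("search query") }).
// Extra keys (e.g. "$schema", "additionalProperties") vary by version.
const webSearchParameters = {
  type: "object",
  properties: {
    searchQuery: {
      type: "string",
      description: "search query",
    },
  },
  required: ["searchQuery"],
};

// The `parse` callback converts the model's raw argument string into a
// typed object before the tool function runs:
const parseArgs = (input: string) =>
  JSON.parse(input) as { searchQuery: string };
```

The model emits its tool arguments as a JSON string; `parse` is what turns that string into the typed object your tool function receives.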

3. Create a backend endpoint

For the search application, the API only needs the current search query and the previous agent response (if any) for the LLM to generate an appropriate response. Since the entire conversation history is not required, the API endpoint may look as follows:

app/api/chat/route.ts
import { NextRequest } from "next/server";
import OpenAI from "openai";
import { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import { transformStream } from "@crayonai/stream";
import { tools } from "./tools";
import { systemPrompt } from "./systemPrompt";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

export async function POST(req: NextRequest) {
  const { prompt, previousC1Response } = (await req.json()) as {
    prompt: string;
    previousC1Response?: string;
  };

  const runToolsResponse = client.beta.chat.completions.runTools({
    model: "c1-nightly",
    messages: [
      {
        // Add the system prompt to provide appropriate instructions to the agent on how to generate the response and what UI constraints to consider.
        role: "system",
        content: systemPrompt,
      },

      // If there was a previous agent response, the user prompt may be a follow up question. Add the previous response
      // to the messages sent to the LLM so that the agent can generate an appropriate response.
      ...((previousC1Response
        ? [{ role: "assistant", content: previousC1Response }]
        : []) as ChatCompletionMessageParam[]),

      { role: "user", content: prompt },
    ],
    stream: true,
    tools: tools,
  });

  const llmStream = await runToolsResponse;

  const responseStream = transformStream(llmStream, (chunk) => {
    return chunk.choices[0]?.delta?.content || "";
  });

  return new Response(responseStream as ReadableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}

Implementing the frontend

Unlike the C1Chat component, the C1Component leaves state management up to you, providing greater flexibility. The frontend implementation can therefore be broken down into the following parts:

  1. Implementing state management
  2. Making the API call
  3. Implementing the UI

1. Implementing state management

For the search application, the following states should suffice:

  • query - The search query entered by the user
  • c1Response - For storing the response sent by the C1 API
  • isLoading - For tracking when a request is in progress
  • abortController - For managing request cancellation

The states required can depend on individual application requirements. Feel free to make changes to the states as per your use case.

A useUIState hook can be used to manage the state and provide a clean interface to UI code for accessing and modifying the state. Here’s an example implementation:

app/home/uiState.ts
import { useState } from "react";
import { makeApiCall } from "./api";

/**
 * Type definition for the UI state.
 * Contains all the state variables needed for the application's UI.
 */
export type UIState = {
  /** The current search query input */
  query: string;
  /** The current response from the C1 API */
  c1Response: string;
  /** Whether an API request is currently in progress */
  isLoading: boolean;
};

/**
 * Custom hook for managing the application's UI state.
 * Provides a centralized way to manage state and API interactions.
 *
 * @returns An object containing:
 * - state: Current UI state
 * - actions: Functions to update state and make API calls
 */
export const useUIState = () => {
  // State for managing the search query input
  const [query, setQuery] = useState("");
  // State for storing the API response
  const [c1Response, setC1Response] = useState("");
  // State for tracking if a request is in progress
  const [isLoading, setIsLoading] = useState(false);
  // State for managing request cancellation
  const [abortController, setAbortController] =
    useState<AbortController | null>(null);

  /**
   * Wrapper function around makeApiCall that provides necessary state handlers.
   * This keeps the component interface simple while handling all state management internally.
   */
  const handleApiCall = async (
    searchQuery: string,
    previousC1Response?: string
  ) => {
    // makeApiCall will be implemented in the next step
    await makeApiCall({
      searchQuery,
      previousC1Response,
      setC1Response,
      setIsLoading,
      abortController,
      setAbortController,
    });
  };

  // Return the state and actions in a structured format
  return {
    state: {
      query,
      c1Response,
      isLoading,
    },
    actions: {
      setQuery,
      setC1Response,
      makeApiCall: handleApiCall,
    },
  };
};

2. Setting up the API call

Since the search application does not need the entire conversation history, it is sufficient to send the current search query and the previous agent response (if any) to the backend. This ensures that if the current query is a follow-up question about the previous search result, the LLM can generate an appropriate response.
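On the backend, this query/previous-response pair translates into the messages array sent to the LLM. The following small helper is purely illustrative (the `buildMessages` name is hypothetical, not part of the application code) and mirrors how the POST handler assembles its messages:

```typescript
type Message = { role: "system" | "assistant" | "user"; content: string };

// Hypothetical helper mirroring the backend's message assembly:
// the system prompt first, then the previous C1 response (only when
// this is a follow-up query), then the current user query.
const buildMessages = (
  systemPrompt: string,
  prompt: string,
  previousC1Response?: string
): Message[] => [
  { role: "system", content: systemPrompt },
  ...(previousC1Response
    ? [{ role: "assistant", content: previousC1Response } as Message]
    : []),
  { role: "user", content: prompt },
];
```

A fresh query produces a two-message array; a follow-up query inserts the previous agent response as an assistant message, giving the LLM the context it needs.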

A function makeApiCall can be implemented as follows:

app/home/api.ts
/**
 * Type definition for parameters required by the makeApiCall function.
 * This includes both the API request parameters and state management callbacks.
 */
export type ApiCallParams = {
  /** The search query to be sent to the API */
  searchQuery: string;
  /** Optional previous response for context in follow-up queries */
  previousC1Response?: string;
  /** Callback to update the response state */
  setC1Response: (response: string) => void;
  /** Callback to update the loading state */
  setIsLoading: (isLoading: boolean) => void;
  /** Current abort controller for cancelling ongoing requests */
  abortController: AbortController | null;
  /** Callback to update the abort controller state */
  setAbortController: (controller: AbortController | null) => void;
};

/**
 * Makes an API call to the /api/chat endpoint with streaming response handling.
 * Supports request cancellation and manages loading states.
 *
 * @param params - Object containing all necessary parameters and callbacks
 */
export const makeApiCall = async ({
  searchQuery,
  previousC1Response,
  setC1Response,
  setIsLoading,
  abortController,
  setAbortController,
}: ApiCallParams) => {
  try {
    // Cancel any ongoing request before starting a new one
    if (abortController) {
      abortController.abort();
    }

    // Create and set up a new abort controller for this request
    const newAbortController = new AbortController();
    setAbortController(newAbortController);
    setIsLoading(true);

    // Make the API request with the abort signal
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt: searchQuery,
        previousC1Response,
      }),
      signal: newAbortController.signal,
    });

    // Set up stream reading utilities
    const decoder = new TextDecoder();
    const stream = response.body?.getReader();

    if (!stream) {
      throw new Error("response.body not found");
    }

    // Initialize accumulator for streamed response
    let streamResponse = "";

    // Read the stream chunk by chunk
    while (true) {
      const { done, value } = await stream.read();
      // Decode the chunk, considering if it's the final chunk
      const chunk = decoder.decode(value, { stream: !done });

      // Accumulate response and update state
      streamResponse += chunk;
      setC1Response(streamResponse);

      // Break the loop when stream is complete
      if (done) {
        break;
      }
    }
  } catch (error) {
    console.error("Error in makeApiCall:", error);
  } finally {
    // Clean up: reset loading state and abort controller
    setIsLoading(false);
    setAbortController(null);
  }
};

3. Implementing the UI

For a search application, the following components are required:

  • A search query input box
  • A “Submit” button to initiate the search
  • C1Component for rendering the GenUI response

You will also need to wrap the C1Component in a ThemeProvider to ensure that the GenUI components are styled correctly.

Here’s an example code block implementing the entire UI:

app/home/page.tsx
"use client";

import "@crayonai/react-ui/styles/index.css";
import { ThemeProvider, C1Component } from "@thesysai/genui-sdk";
import { useUIState } from "./uiState";
import { Loader } from "./Loader";

export const HomePage = () => {
  const { state, actions } = useUIState();

  return (
    <div className="min-h-screen bg-gray-50 dark:bg-gray-900 py-8 px-4">
      <div className="max-w-[750px] mx-auto space-y-6">
        <div className="flex gap-4 items-center">
          <input
            className="flex-1 px-4 py-2 rounded-lg border border-gray-300 dark:border-gray-600
              bg-white dark:bg-gray-800 text-gray-900 dark:text-gray-100
              focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent
              placeholder-gray-500 dark:placeholder-gray-400"
            value={state.query}
            placeholder="Enter company name/domain..."
            onChange={({ target: { value } }) => actions.setQuery(value)}
            onKeyDown={(e) => {
              // make api call only when response loading is not in progress
              if (e.key === "Enter" && !state.isLoading) {
                actions.makeApiCall(state.query);
              }
            }}
          />
          <button
            onClick={() => actions.makeApiCall(state.query)}
            disabled={state.query.length === 0 || state.isLoading}
            className="enabled:cursor-pointer px-6 py-2 bg-blue-600 hover:bg-blue-700 text-white font-medium rounded-lg
              transition-colors duration-200 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2
              disabled:opacity-50 disabled:cursor-not-allowed flex items-center justify-center min-w-[100px]"
          >
            {state.isLoading ? <Loader /> : "Submit"}
          </button>
        </div>

        <div className="max-w-[750px] mx-auto">
          <ThemeProvider mode="dark">
            <C1Component
              c1Response={state.c1Response}
              isStreaming={state.isLoading}
              updateMessage={(message) => actions.setC1Response(message)}
              onAction={({ llmFriendlyMessage }) => {
                if (!state.isLoading) {
                  actions.makeApiCall(llmFriendlyMessage, state.c1Response);
                }
              }}
            />
          </ThemeProvider>
        </div>
      </div>
    </div>
  );
};

4. Test it out!

That’s it! Your search app with C1Component is now complete. Try running it and search for a few companies on the /home route!