Generative UI is most useful when the model can call into your data and services. C1 supports tool integration, so your UIs can be powered by live data rather than raw LLM responses alone.

What are tools?

A tool is an API or function that you expose to the model. Instead of guessing or hallucinating values, the model can call your tool and use the results to generate UI. Examples of tools include:
  • Fetching live stock prices from a finance API
  • Querying a database for users or orders
  • Calling an internal microservice
  • Running a calculation or simulation
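Conceptually, a tool is just a typed function you already know how to write. As a minimal sketch (the getStockPrice name and the quote URL are hypothetical placeholders), a live stock price tool could be an ordinary async function that you later register with the model, as shown in step 1 below:

// Hypothetical example: a plain async function that fetches a stock quote.
// Once registered as a tool, the model can call it instead of guessing a price.
export async function getStockPrice(symbol: string): Promise<string> {
  // Replace the placeholder URL with your real finance API.
  const res = await fetch(`https://api.example.com/quote?symbol=${encodeURIComponent(symbol)}`);
  const data = await res.json();
  return JSON.stringify({ symbol, price: data.price });
}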

How tools fit into the flow

Tool calling (also known as function calling) works the same way here as with any OpenAI-compatible LLM endpoint. For an in-depth explanation of how it works, refer to the OpenAI function-calling guide. This guide demonstrates how to equip an agent with a web search tool so it can provide up-to-the-minute information.
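Under the hood, the round trip looks like this: the model responds with a tool call containing JSON-encoded arguments, your code runs the matching function, and the result is sent back so the model can produce the final answer. The runTools helper used in step 3 automates this loop; the sketch below shows the manual version for illustration only. Here, client, model, messages, and tools are assumed to be set up as in your route, and callTool is a hypothetical dispatcher that invokes the named function with the parsed arguments.

// Illustrative sketch of the manual tool-calling round trip that runTools automates.
const first = await client.chat.completions.create({ model, messages, tools });

// The assistant message may contain one or more tool calls with JSON arguments.
const assistantMessage = first.choices[0].message;
messages.push(assistantMessage);

for (const call of assistantMessage.tool_calls ?? []) {
  const result = await callTool(call.function.name, JSON.parse(call.function.arguments));
  // Return the tool output to the model under the matching tool_call_id.
  messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
}

// Ask the model again, now that it can see the tool results.
const final = await client.chat.completions.create({ model, messages, tools });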

1. Define a tool for the agent to use

To get started, we need to define the tool and describe how the agent should use it. For our company research assistant, a web search tool is essential for gathering current information, so this guide adds one powered by Tavily. You may need to install additional dependencies such as zod, zod-to-json-schema, and @tavily/core. You can install them with npm:
cli
> npm install zod zod-to-json-schema @tavily/core
app/api/chat/tools.ts
import { JSONSchema } from "openai/lib/jsonschema.mjs";
import { RunnableToolFunctionWithParse } from "openai/lib/RunnableFunction.mjs";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { tavily } from "@tavily/core";

const tavilyClient = tavily({ apiKey: process.env.TAVILY_API_KEY });

export const tools: [
  RunnableToolFunctionWithParse<{
    searchQuery: string;
  }>
] = [
  {
    type: "function",
    function: {
      name: "web_search",
      description:
        "Search the web for a given query; returns details about any topic, including businesses.",
      // Parse the raw JSON argument string from the model into a typed object.
      parse: (input) => {
        return JSON.parse(input) as { searchQuery: string };
      },
      parameters: zodToJsonSchema(
        z.object({
          searchQuery: z.string().describe("search query"),
        })
      ) as JSONSchema,
      // Execute the search and return the results as a JSON string for the model.
      function: async ({ searchQuery }: { searchQuery: string }) => {
        const results = await tavilyClient.search(searchQuery, {
          maxResults: 5,
        });

        return JSON.stringify(results);
      },
      strict: true,
    },
  },
];

2. Instruct the agent to use the tool

Now that the agent has a tool, we need to teach it when and how to use it. A system prompt is the perfect way to provide these instructions. We can tell the agent to use the web_search tool whenever it needs current information to answer a question about a company. Here’s a sample system prompt:
app/api/chat/systemPrompt.ts
export const systemPrompt = `
  You are a business research assistant, similar to Crunchbase. You answer questions about a company or domain.
  Given a company name or domain, search the web for the latest information.`;

3. Pass the tool to the agent

Now you just need to pass the tools to the agent so it can start using them. If you’ve followed the Quickstart guide, this takes a couple of small changes:
  1. Import the tools and systemPrompt into your route.ts file.
  2. Replace the create call in your route.ts file with the runTools helper, which takes the list of tools available to the agent.
Here’s an example of how to do this:
app/api/chat/route.ts
import { systemPrompt } from "./systemPrompt";
import { tools } from "./tools";

export async function POST(req: NextRequest) {
  ...
  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-sonnet-4/v-20250617",
    messages: [
      { role: "system", content: systemPrompt },
      ...messages
    ],
    tools,
    stream: true,
  });
  ...
}

4. Test it out

Try asking “Who is the current president of the United States?” The agent should call the web_search tool to fetch current information before generating its response.
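If you prefer to verify the route directly rather than through the UI, a quick script works too. The sketch below assumes the route is served at /api/chat and accepts a JSON body with a messages array, as in the Quickstart; adjust the path and body shape to match your app:

// Hypothetical smoke test for the chat route (path and body shape assumed).
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [
      { role: "user", content: "Who is the current president of the United States?" },
    ],
  }),
});

// The endpoint streams the generated response; print the raw text for a quick check.
console.log(await res.text());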