This guide assumes that you have completed the Quickstart.
Tool calling lets your agent invoke external tools or functions. This is useful for a variety of use cases, such as:
  • Retrieving information from a database
  • Performing a calculation
  • Performing an API call
  • … and much more!
This guide demonstrates how to equip an agent with a web search tool, enabling it to provide up-to-the-minute information about companies.
1. Define a tool for the agent to use

To get started, we need to define the tool and describe how the agent should use it. For our company research assistant, a web search tool is essential for gathering current information, so this guide adds one powered by Tavily.
You may need to install additional dependencies such as zod, zod-to-json-schema, and @tavily/core. You can install them using npm:
npm install zod zod-to-json-schema @tavily/core
import { JSONSchema } from "openai/lib/jsonschema.mjs";
import { RunnableToolFunctionWithParse } from "openai/lib/RunnableFunction.mjs";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { tavily } from "@tavily/core";

const tavilyClient = tavily({ apiKey: process.env.TAVILY_API_KEY });

export const tools: [
  RunnableToolFunctionWithParse<{
    searchQuery: string;
  }>
] = [
  {
    type: "function",
    function: {
      name: "web_search",
      description:
        "Search the web for a given query; returns details on any topic, including businesses",
      parse: (input) => {
        return JSON.parse(input) as { searchQuery: string };
      },
      parameters: zodToJsonSchema(
        z.object({
          searchQuery: z.string().describe("search query"),
        })
      ) as JSONSchema,
      function: async ({ searchQuery }: { searchQuery: string }) => {
        const results = await tavilyClient.search(searchQuery, {
          maxResults: 5,
        });

        return JSON.stringify(results);
      },
      strict: true,
    },
  },
];
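A detail worth calling out: the model emits tool-call arguments as a raw JSON string, and the `parse` hook is what turns that string into the typed object that `function` receives. A minimal stand-alone sketch of that step (no API calls; the argument string is invented for illustration):

```typescript
// The model emits tool-call arguments as a raw JSON string.
// This mirrors the parse step in the tool definition above.
const rawArguments = '{"searchQuery": "Acme Corp latest funding round"}';

// parse: convert the string into the typed input that the
// tool's function destructures as { searchQuery }.
const parsed = JSON.parse(rawArguments) as { searchQuery: string };

console.log(parsed.searchQuery);
```

Because `parse` runs before your tool function, it is also the natural place to validate the arguments (for example with the same zod schema) before doing any real work.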
2. Instruct the agent to use the tool

Now that the agent has a tool, we need to teach it when and how to use it. A system prompt is the perfect way to provide these instructions. We can tell the agent to use the web_search tool whenever it needs current information to answer a question about a company.
If you are unfamiliar with system prompts, you can learn more about them in the Using System Prompts guide.
Here’s a sample system prompt:
app/api/chat/systemPrompt.ts
export const systemPrompt = `
  You are a business research assistant, similar to Crunchbase, that answers questions about a company or domain.
  Given a company name or domain, search the web for the latest information.

  At the end of your response, add a form with a single input field to ask for follow-up questions.
`;
3. Pass the tool to the agent

Now you just need to pass the tools to the agent so it can start using them. If you've followed the Quickstart guide, this takes a couple of small changes:
  1. Import the tools and systemPrompt to your route.ts file.
  2. Replace the create call in your route.ts file with a convenient runTools call that takes the list of tools available to the agent.
Here’s an example of how to do this:
// ... all your other imports
import { systemPrompt } from "./systemPrompt";
import { tools } from "./tools";

export async function POST(req: NextRequest) {
  // ... rest of your code

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-sonnet-4/v-20250617",
    messages: [
      { role: "system", content: systemPrompt },
      ...((previousC1Response
        ? [{ role: "assistant", content: previousC1Response }]
        : []) as ChatCompletionMessageParam[]),

      { role: "user", content: prompt },
    ],
    tools,
    stream: true,
  });

  // ... rest of your code
}
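Under the hood, `runTools` automates the request/execute/re-request loop that you would otherwise write by hand: call the model, and if it requests a tool, run the matching function, append the result as a tool message, and call the model again until it produces a final answer. A simplified, self-contained sketch of that loop (not the actual openai SDK internals; the model and tool here are fakes for illustration):

```typescript
// Conceptual sketch of the loop that runTools automates.
// All names below are illustrative, not part of the openai SDK.
type Message = {
  role: string;
  content: string | null;
  tool_call?: { name: string; args: string };
};

// A fake "model" that requests the web_search tool once, then answers.
function fakeModel(messages: Message[]): Message {
  const hasToolResult = messages.some((m) => m.role === "tool");
  if (!hasToolResult) {
    return {
      role: "assistant",
      content: null,
      tool_call: { name: "web_search", args: '{"searchQuery":"Acme"}' },
    };
  }
  return { role: "assistant", content: "Here is what I found about Acme." };
}

// A fake tool registry standing in for the tools array.
const toolImpls: Record<string, (args: string) => string> = {
  web_search: (args) => `results for ${JSON.parse(args).searchQuery}`,
};

// The loop: call the model; if it requests a tool, execute it,
// append the result as a tool message, and call the model again.
function runToolsSketch(messages: Message[]): string {
  for (;;) {
    const reply = fakeModel(messages);
    if (!reply.tool_call) return reply.content ?? "";
    messages.push(reply);
    messages.push({
      role: "tool",
      content: toolImpls[reply.tool_call.name](reply.tool_call.args),
    });
  }
}

const answer = runToolsSketch([{ role: "user", content: "Tell me about Acme" }]);
```

This is why `runTools` replaces the plain `create` call: the streaming chunks you receive already reflect the final, tool-informed answer.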
To view the full route.ts code, expand the following code block:
import { NextRequest } from "next/server";
import OpenAI from "openai";
import { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import { transformStream } from "@crayonai/stream";
import { tools } from "./tools";
import { systemPrompt } from "./systemPrompt";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

export async function POST(req: NextRequest) {
  const { prompt, previousC1Response } = (await req.json()) as {
    prompt: string;
    previousC1Response?: string;
  };

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-sonnet-4/v-20250617",
    messages: [
      {
        role: "system",
        content: systemPrompt,
      },
      ...((previousC1Response
        ? [{ role: "assistant", content: previousC1Response }]
        : []) as ChatCompletionMessageParam[]),

      { role: "user", content: prompt },
    ],
    stream: true,
    tools,
  });

  const responseStream = transformStream(llmStream, (chunk) => {
    return chunk.choices[0]?.delta?.content || "";
  });

  return new Response(responseStream as ReadableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}
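The `transformStream` call above maps each streaming chunk to its text delta before sending it to the client. Conceptually it works like the sketch below (not the actual @crayonai/stream implementation; the stream here is a fake built for illustration):

```typescript
// A chunk shape matching what the callback above reads from.
type Chunk = { choices: Array<{ delta?: { content?: string } }> };

// A fake LLM stream: some chunks carry text, others (e.g. tool-call
// or role chunks) have no content delta.
async function* fakeLlmStream(): AsyncGenerator<Chunk> {
  yield { choices: [{ delta: { content: "Hello" } }] };
  yield { choices: [{ delta: {} }] }; // no text in this chunk
  yield { choices: [{ delta: { content: ", world" } }] };
}

// Pull each chunk and keep only the text delta, exactly like the
// callback passed to transformStream in the route above.
async function collectText(stream: AsyncGenerator<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content || "";
  }
  return text;
}
```

Chunks without a content delta contribute nothing, so the client receives a clean stream of answer text.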
4. Test it out!

Your company research assistant is now equipped with a powerful web search tool. Try asking it for recent news about a company to see it in action.
Company research agent with tool calling