This guide assumes that you have completed the Quickstart.

Tool calling lets your agent invoke external tools or functions during a run. This is useful for a variety of use cases, such as:

  • Retrieving information from a database
  • Performing a calculation
  • Making an API call
  • … and much more!

The following is a step-by-step guide on how to use tool calling. For the purpose of demonstration, this guide implements a web search tool to enable the agent to fetch up-to-date information from the web.

1. Define a tool for the agent to use

First, you need to tell the agent how to use the tool. This is done by defining a tool call function. To implement a web search tool, you can use an API such as Exa. You can create or copy your API key from the Exa dashboard.

Once you have the API key, you can set it in an environment variable. The following code block implements web search using Exa and uses an environment variable named EXA_API_KEY to store the API key.
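For example, you can set the variable in your shell before starting the dev server (the value below is a placeholder; substitute the key from your Exa dashboard):

```shell
# Placeholder value shown; replace with the key from your Exa dashboard.
export EXA_API_KEY="your-exa-api-key"
```

In a Next.js project you can also put the same line (without `export`) in a `.env.local` file, which Next.js loads automatically.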

Note that you may need to install additional dependencies such as zod, zod-to-json-schema and exa-js to use this code. If you use npm for package management, you can install the dependencies using the following command:

npm install zod zod-to-json-schema exa-js

import type {
  RunnableToolFunctionWithParse,
  RunnableToolFunctionWithoutParse,
} from "openai/lib/RunnableFunction.mjs";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import Exa from "exa-js";
import type { JSONSchema } from "openai/lib/jsonschema.mjs";

const exa = new Exa(process.env.EXA_API_KEY!);

export const tools: (
  | RunnableToolFunctionWithoutParse
  | RunnableToolFunctionWithParse<{ query: string }>
)[] = [
  {
    type: "function",
    function: {
      name: "webSearch",
      description: "Use this tool to perform a web search.",
      parse: JSON.parse,
      parameters: zodToJsonSchema(
        z.object({
          query: z.string().describe("The query to search for."),
        })
      ) as JSONSchema,
      function: async ({ query }: { query: string }) =>
        await exa.search(query, { numResults: 5 }),
      strict: true,
    },
  },
];
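For reference, the `zodToJsonSchema(...)` call above serializes the Zod schema into a plain JSON Schema object, which is what the model actually receives as the tool's parameter definition. The result is roughly the following sketch (the real output may carry extra fields such as `$schema` and `additionalProperties`):

```typescript
// Approximate JSON Schema produced for the webSearch parameters.
const webSearchParameters = {
  type: "object",
  properties: {
    query: {
      type: "string",
      description: "The query to search for.",
    },
  },
  required: ["query"],
};

console.log(JSON.stringify(webSearchParameters, null, 2));
```

The `description` strings from `.describe(...)` carry through to this schema, so they are worth writing carefully: they are the model's only documentation for each parameter.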

2. Instruct the agent to use the tool

After defining the tool call function, you need to instruct the agent to use the tool. You can do this with a system prompt. The system prompt also allows you to set certain guidelines and rules for the agent to follow, such as how to use the tool call output in its responses.

If you are unfamiliar with system prompts, you can learn more about them in the Using System Prompts guide.

Here’s a sample system prompt:

const systemPrompt = `
You are a helpful assistant that performs web searches using the webSearch tool to provide up-to-date answers.

Rules:
- Always use the webSearch tool for current or web-dependent queries.
- Include a carousel at the end of your response displaying all pages found.
  - Each carousel card must:
    - Use the search result's title.
    - Include the provided image.
    - End with a markdown "Read more" link to the webpage.
`;

3. Pass the tool to the agent

Now you just need to pass the tool call function to the agent so it can start using the tool. If you’ve followed the Quickstart guide, you can pass the tool call function to the agent by making a couple of small changes:

  1. Import the tools and systemPrompt into your route.ts file.
  2. Replace the create call in your route.ts file with a runTools call, which takes the list of tools available to the agent, automatically executes any tool calls the model makes, and feeds the results back to the model until it produces a final response.

Here’s an example of how to do this:

// ... all your other imports
import { systemPrompt } from "./systemPrompt";
import { tools } from "./tools";

export async function POST(req: NextRequest) {
  // ... rest of your code

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1-nightly",
    messages: [
      { role: "system", content: systemPrompt },
      ...messageStore.getOpenAICompatibleMessageList(),
    ],
    tools,
    stream: true,
  });

  // ... rest of your code
}

To view the full route.ts code, expand the following code block:

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { transformStream } from "@crayonai/stream";
import { DBMessage, getMessageStore } from "./messageStore";
import { systemPrompt } from "./systemPrompt";
import { tools } from "./tools";

export async function POST(req: NextRequest) {
  const { prompt, threadId, responseId } = (await req.json()) as {
    prompt: DBMessage;
    threadId: string;
    responseId: string;
  };
  const client = new OpenAI({
    baseURL: "https://api.thesys.dev/v1/embed/",
    apiKey: process.env.THESYS_API_KEY,
  });
  const messageStore = getMessageStore(threadId);

  messageStore.addMessage(prompt);

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1-nightly",
    messages: [
      { role: "system", content: systemPrompt },
      ...messageStore.getOpenAICompatibleMessageList(),
    ],
    tools,
    stream: true,
  });

  const responseStream = transformStream(
    llmStream,
    (chunk) => {
      return chunk.choices[0].delta.content;
    },
    {
      onEnd: ({ accumulated }) => {
        const message = accumulated.filter((message) => message).join("");
        messageStore.addMessage({
          role: "assistant",
          content: message,
          id: responseId,
        });
      },
    }
  ) as ReadableStream<string>;

  return new NextResponse(responseStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}

4. Test it out!

You can now test out your agent, newly equipped with a web search tool:
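If you don't yet have a frontend wired up, here is a minimal client-side sketch for exercising the endpoint. The `/api/chat` path is an assumption (adjust it to wherever your route.ts lives), and the prompt shape assumes DBMessage carries role and content fields:

```typescript
// Build a request body in the shape route.ts destructures:
// { prompt, threadId, responseId }. The DBMessage shape here
// (role/content) is an assumption; match your messageStore's type.
function buildChatRequest(
  text: string,
  threadId: string,
  responseId: string
): string {
  return JSON.stringify({
    prompt: { role: "user", content: text },
    threadId,
    responseId,
  });
}

async function askAgent(text: string): Promise<string> {
  // "/api/chat" is a hypothetical path; point this at your route.
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildChatRequest(text, "thread-1", "response-1"),
  });

  // Read the text/event-stream response incrementally as it arrives.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let answer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    answer += decoder.decode(value, { stream: true });
  }
  return answer;
}
```

Try a query that needs fresh information, such as "What were yesterday's top tech headlines?", and the agent should call webSearch before answering.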