This guide assumes that you have completed the Quickstart.
System prompts are a powerful way to customize the behavior of your AI applications. They provide instructions to the agent on which UI components to use, how to behave, what tone and language to use, and more. The following is a step-by-step guide to adding a system prompt to your AI application. For demonstration purposes, this guide builds a company research agent.
1. Write some rules for the agent

You can give the agent rules that it must follow. For example, you can instruct the agent to use a specific UI component for a certain type of question. Here’s a demonstration of how each rule affects the agent’s output:
Rule → Output (in the rendered docs, each rule is paired with an example of the output it produces):

- Use tables to show structured data, such as financial highlights or key executives.
- Use graphs to visualize quantitative information, like stock performance or revenue growth.
- Use carousels to show information about products from the company.
2. Write the system prompt

You can use the rules you prepared in the previous step to write a system prompt. You can also use this prompt to instruct the agent to play a specific role or adopt a specific tone and language.
systemPrompt.ts
export const systemPrompt = `
You are a business research assistant. Your goal is to provide concise and
accurate information about companies. When responding, follow these rules:

Rules:
- Use tables to show structured data such as financial highlights, key executives, or product lists.
- Use graphs to visualize quantitative information like stock performance or revenue growth.
- Use carousels to show information about products from the company.
`;
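For instance, a variant of the prompt above that also pins down the agent's role, tone, and language might look like the following. The exact wording here is illustrative; adjust it to your product's voice:

```typescript
// systemPrompt.ts — illustrative variant that also fixes role, tone, and language.
export const systemPrompt = `
You are a business research assistant. Speak in a formal, neutral tone and
always answer in English. Your goal is to provide concise and accurate
information about companies. When responding, follow these rules:

Rules:
- Use tables to show structured data such as financial highlights, key executives, or product lists.
- Use graphs to visualize quantitative information like stock performance or revenue growth.
- Use carousels to show information about products from the company.
`;
```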
3. Add the system prompt to the agent

You can add the system prompt to your message history when passing it to the SDK, ensuring it is always the first message of any conversation. If you've used the quickstart template, you only need to make the highlighted changes to start using the system prompt.
// ... all your imports
import { systemPrompt } from "./systemPrompt";

export async function POST(req: NextRequest) {
  // ... existing code

  const runToolsResponse = client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-3.5-sonnet/v-20250617", // available models: https://docs.thesys.dev/guides/models-pricing#model-table
    messages: [
      {
        role: "system",
        content: systemPrompt,
      },
      ...messages,
    ],
    stream: true,
    tools: tools,
  });

  // ... existing code
}
You can view the full code by expanding the following code block:
import { NextRequest } from "next/server";
import OpenAI from "openai";
import { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import { transformStream } from "@crayonai/stream";
import { tools } from "./tools";
import { systemPrompt } from "./systemPrompt";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

export async function POST(req: NextRequest) {
  const { prompt, previousC1Response } = (await req.json()) as {
    prompt: string;
    previousC1Response?: string;
  };

  const messages: ChatCompletionMessageParam[] = [];

  if (previousC1Response) {
    messages.push({
      role: "assistant",
      content: previousC1Response,
    });
  }

  messages.push({
    role: "user",
    content: prompt,
  });

  const runToolsResponse = client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-3.5-sonnet/v-20250617", // available models: https://docs.thesys.dev/guides/models-pricing#model-table
    messages: [
      {
        role: "system",
        content: systemPrompt,
      },
      ...messages,
    ],
    stream: true,
    tools: tools,
  });

  const llmStream = await runToolsResponse;

  const responseStream = transformStream(llmStream, (chunk) => {
    return chunk.choices[0]?.delta?.content || "";
  });

  return new Response(responseStream as ReadableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}
For detailed information on tool calling, including what it is and how to use it, you can refer to the Tool Calling guide.
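The `tools` module imported above is defined by the quickstart template. As a rough sketch of its shape, each entry passed to `runTools` pairs a JSON-schema description with a `parse` step and the function the SDK invokes. The `getCompanyProfile` tool and its stubbed data below are hypothetical; substitute your real data source:

```typescript
// tools.ts — hypothetical sketch; the quickstart template defines its own tools.
type CompanyProfileArgs = { companyName: string };

// Stand-in tool implementation. In a real agent this would call a
// financial-data API instead of returning stubbed data.
async function getCompanyProfile({ companyName }: CompanyProfileArgs) {
  return JSON.stringify({
    name: companyName,
    sector: "unknown",
    note: "stubbed response for demonstration",
  });
}

export const tools = [
  {
    type: "function" as const,
    function: {
      name: "getCompanyProfile",
      description: "Look up basic profile information for a company.",
      parameters: {
        type: "object",
        properties: {
          companyName: { type: "string", description: "Company to look up" },
        },
        required: ["companyName"],
      },
      // The SDK parses the model's raw JSON arguments, then calls the function.
      parse: JSON.parse,
      function: getCompanyProfile,
    },
  },
];
```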
4. Test it out!

Test out your company research agent by asking it a few questions.
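You can also exercise the endpoint from a small script instead of the browser. The route path below (`/api/ask`) and the `askAgent` helper are assumptions; point the URL at wherever your POST handler actually lives:

```typescript
// Hypothetical test script — the /api/ask path is an assumption, not part of the template.
export function buildRequestBody(prompt: string, previousC1Response?: string): string {
  // Mirrors the shape the route handler reads via req.json().
  return JSON.stringify({ prompt, previousC1Response });
}

export async function askAgent(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:3000/api/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildRequestBody(prompt),
  });
  // The handler streams text/event-stream, so read the body incrementally.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value));
  }
}
```

With your dev server running, calling `askAgent("Compare Apple's and Microsoft's revenue growth")` prints the streamed response as it arrives.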