This guide assumes that you have completed the Tool Calling Guide.

The Model Context Protocol (MCP) is an open standard that allows your agents to connect securely to external tools and data sources. Think of MCP as a “universal connector” for AI — it standardizes how language models interact with various systems like databases, APIs, file systems, and custom tools.

MCP transforms your agents from isolated models into powerful assistants that can access real-time data, perform actions, and interact with your entire digital ecosystem through a single, standardized protocol.

Step 1: Set up the MCP client

First, let’s install the necessary dependencies to work with MCP in your C1 application.

You’ll need the MCP TypeScript SDK, which includes the stdio client transport. The filesystem MCP server used in this example is launched on demand via npx, so it doesn’t need to be installed ahead of time.

npm install @modelcontextprotocol/sdk
Step 2: Create an MCP client integration

Now let’s create the MCP client using the @modelcontextprotocol/sdk package. This implementation connects to a filesystem MCP server and handles tool execution.

Create app/api/chat/mcp.ts:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import OpenAI from "openai";

export class MCPClient {
  private mcp: Client;
  private transport: StdioClientTransport | null = null;
  public tools: OpenAI.ChatCompletionTool[] = [];

  constructor() {
    this.mcp = new Client({
      name: "c1-chat-mcp-client",
      version: "1.0.0"
    });
  }

  async connect() {
    // Connect to filesystem MCP server (no authentication required)
    const command = "npx";
    const args = [
      "-y",
      "@modelcontextprotocol/server-filesystem@latest",
      process.cwd(),
    ];

    this.transport = new StdioClientTransport({
      command,
      args,
    });

    await this.mcp.connect(this.transport);

    // List available tools from the MCP server
    const toolsResult = await this.mcp.listTools();
    this.tools = toolsResult.tools.map((tool) => ({
      type: "function" as const,
      function: {
        name: tool.name,
        description: tool.description,
        parameters: tool.inputSchema,
      },
    }));
  }

  async runTool({
    tool_call_id,
    name,
    args,
  }: {
    tool_call_id: string;
    name: string;
    args: Record<string, unknown>;
  }) {
    try {
      const result = await this.mcp.callTool({
        name,
        arguments: args,
      });

      return {
        tool_call_id,
        role: "tool" as const,
        content: JSON.stringify(result.content),
      };
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : "Unknown error";
      return {
        tool_call_id,
        role: "tool" as const,
        content: JSON.stringify({
          error: `Tool call failed: ${errorMessage}`,
        }),
      };
    }
  }

  async disconnect() {
    if (this.transport) {
      await this.transport.close();
    }
  }
}
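The tool-mapping step inside connect() is the heart of the bridge: each tool descriptor returned by the MCP server becomes an OpenAI-style function tool that the model can call. Isolated as a pure function, the transformation looks like this (the McpToolInfo shape below is a simplified assumption for illustration, not the SDK’s exact type):

```typescript
// Simplified MCP tool descriptor (assumed shape for illustration).
interface McpToolInfo {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// OpenAI-style function tool definition.
interface OpenAIFunctionTool {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters: Record<string, unknown>;
  };
}

// Convert MCP tool descriptors into OpenAI-compatible tool definitions,
// mirroring the mapping performed in MCPClient.connect().
function toOpenAITools(tools: McpToolInfo[]): OpenAIFunctionTool[] {
  return tools.map((tool) => ({
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.inputSchema,
    },
  }));
}

// Example: a filesystem server typically exposes tools like read_file.
const mapped = toOpenAITools([
  { name: "read_file", description: "Read a file", inputSchema: { type: "object" } },
]);
console.log(mapped[0].function.name); // "read_file"
```

Because the MCP server advertises its own JSON Schema for each tool, no hand-written tool definitions are needed on the client side.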
Step 3: Integrate MCP with your C1 agent

Now let’s update your chat route to use the MCP client we just built. This approach uses the OpenAI SDK’s runTools helper, which parses each tool call, executes it, and feeds the result back to the model automatically.

Update your route handler (e.g. app/api/chat/route.ts):

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { transformStream } from "@crayonai/stream";
import { DBMessage, getMessageStore } from "./messageStore";
import { MCPClient } from "./mcp";
import { JSONSchema } from "openai/lib/jsonschema.mjs";

// Initialize MCP client
const mcpClient = new MCPClient();

interface RequestBody {
  prompt: DBMessage;
  threadId: string;
  responseId: string;
}

async function ensureMCPConnection(): Promise<void> {
  if (mcpClient.tools.length === 0) {
    await mcpClient.connect();
  }
}

export async function POST(req: NextRequest): Promise<NextResponse> {
  const { prompt, threadId, responseId } = (await req.json()) as RequestBody;

  const client = new OpenAI({
    baseURL: "https://api.thesys.dev/v1/embed/",
    apiKey: process.env.THESYS_API_KEY,
  });

  const messageStore = getMessageStore(threadId);
  messageStore.addMessage(prompt);

  // Ensure MCP connection is established
  await ensureMCPConnection();

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1-nightly",
    messages: messageStore.getOpenAICompatibleMessageList(),
    tools: mcpClient.tools.map((tool) => ({
      type: "function",
      function: {
        name: tool.function.name,
        description: tool.function.description || "",
        parameters: tool.function.parameters as unknown as JSONSchema,
        parse: JSON.parse,
        function: async (args: unknown) => {
          const results = await mcpClient.runTool({
            tool_call_id: tool.function.name + Date.now().toString(),
            name: tool.function.name,
            args: args as Record<string, unknown>,
          });
          return results.content;
        },
      },
    })),
    stream: true,
  });

  const responseStream = transformStream(
    llmStream,
    (chunk) => {
      return chunk.choices[0].delta.content;
    },
    {
      onEnd: ({ accumulated }) => {
        const message = accumulated.filter((message) => message).join("");
        messageStore.addMessage({
          role: "assistant",
          content: message,
          id: responseId,
        });
      },
    }
  ) as ReadableStream<string>;

  return new NextResponse(responseStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}

This implementation uses the @crayonai/stream package for streaming responses. You’ll need to install this dependency:

npm install @crayonai/stream
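The route also imports DBMessage and getMessageStore from ./messageStore, which comes from the Tool Calling guide and isn’t shown here. If you don’t have it yet, a minimal in-memory sketch might look like the following (the exact shapes are assumptions; your version may persist messages to a database instead):

```typescript
// Hypothetical minimal sketch of app/api/chat/messageStore.ts.
// Stores thread history in memory; replace with real persistence as needed.
export interface DBMessage {
  id?: string;
  role: "user" | "assistant" | "system" | "tool";
  content: string;
}

const threads = new Map<string, DBMessage[]>();

export function getMessageStore(threadId: string) {
  if (!threads.has(threadId)) {
    threads.set(threadId, []);
  }
  const messages = threads.get(threadId)!;
  return {
    addMessage(message: DBMessage) {
      messages.push(message);
    },
    // Strip the DB-only id field before sending history to the model.
    getOpenAICompatibleMessageList() {
      return messages.map(({ id: _omitted, ...rest }) => rest);
    },
  };
}
```

Note that an in-memory Map is reset whenever the server restarts, so this sketch is only suitable for local development.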
Step 4: Test your MCP-enabled agent

Your agent now has access to powerful filesystem operations through MCP! You can test it with prompts like:

  • File operations: “Create a new file called ‘notes.txt’ with today’s meeting summary”
  • Directory browsing: “List all the files in the current directory”
  • File reading: “Read the contents of package.json and summarize the project dependencies”
  • File searching: “Find all TypeScript files in the src directory”

View Source Code

See the full code with integrations for thinking states and error handling.