Model Context Protocol (MCP)
Learn how to use MCP to extend your agents with external tools and data sources
This guide assumes that you have completed the Tool Calling Guide.
The Model Context Protocol (MCP) is an open standard that allows your agents to connect securely to external tools and data sources. Think of MCP as a “universal connector” for AI — it standardizes how language models interact with various systems like databases, APIs, file systems, and custom tools.
MCP transforms your agents from isolated models into powerful assistants that can access real-time data, perform actions, and interact with your entire digital ecosystem through a single, standardized protocol.
Setting up the MCP Client
First, let’s install the necessary dependencies to work with MCP in your C1 application.
You’ll need to install the MCP client library and any specific MCP servers you want to use. For this example, we’ll use a filesystem MCP server.
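A minimal install sketch, assuming npm as your package manager. `@modelcontextprotocol/sdk` is the official MCP TypeScript SDK; the reference filesystem server ships as `@modelcontextprotocol/server-filesystem` and is usually launched on demand via `npx`, so installing it locally is optional:

```shell
# MCP TypeScript SDK (client library)
npm install @modelcontextprotocol/sdk

# Optional: install the reference filesystem server locally.
# It can also be run on demand with: npx -y @modelcontextprotocol/server-filesystem <dir>
npm install @modelcontextprotocol/server-filesystem
```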
Create an MCP client integration
Now let’s create the MCP client using the `@modelcontextprotocol/sdk` package. This implementation connects to a filesystem MCP server and handles tool execution. Create `app/api/chat/mcp.ts`:
Integrate MCP with your C1 agent
Now let’s update your chat route to use the streamlined MCP integration from the thesysdev examples. This approach uses OpenAI’s `runTools` method for automatic tool execution.
This implementation uses the `@crayonai/stream` package for streaming responses. You’ll need to install this dependency:
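Assuming npm as your package manager:

```shell
npm install @crayonai/stream
```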
Test your MCP-enabled agent
Your agent now has access to powerful filesystem operations through MCP! You can test it with prompts like:
- File operations: “Create a new file called ‘notes.txt’ with today’s meeting summary”
- Directory browsing: “List all the files in the current directory”
- File reading: “Read the contents of package.json and summarize the project dependencies”
- File searching: “Find all TypeScript files in the src directory”

View Source Code
See the full code with integrations for thinking states and error handling.