What is the Model Context Protocol? A developer's field guide
MCP is Anthropic's open protocol that lets AI tools plug into external data and actions. Here's a plain-English tour: what it is, how it differs from tool-use, and why every major IDE adopted it in under a year.
If you've spent any time with AI coding tools in the past year, you've seen the acronym MCP appear everywhere. Claude supports it. Cursor supports it. VS Code's agent mode supports it. Every week there's a new MCP server announcement on Hacker News.
But the explanations tend to be either too abstract (“a standard for connecting AI to tools”) or too deep in protocol details to be immediately useful. This post is the middle version: enough to understand what MCP actually is, why it exists, and what the ecosystem looks like today.
Origin: Anthropic, November 2024
Anthropic open-sourced the Model Context Protocol on November 25, 2024. (Source: anthropic.com/news/model-context-protocol) The announcement described it as “a universal, open standard for connecting AI systems with data sources.”
This was not a product launch. Anthropic released a protocol specification, SDKs in Python and TypeScript, and a handful of reference server implementations for Google Drive, Slack, GitHub, Postgres, and Puppeteer. The intention was explicit: they wanted other AI tool builders and service providers to adopt it, not just Claude.
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation co-founded by Anthropic, Block, and OpenAI. The protocol is now governed independently of any single company.
The problem MCP solves
Before MCP, every AI tool that wanted to integrate with an external service had to build a custom integration. Want Claude to read GitHub repositories? Anthropic builds a GitHub integration. Want Cursor to read them too? Cursor builds a separate GitHub integration. And so on, for every tool-service pair.
This is the M×N problem: M AI clients times N external services means M×N custom integrations, each maintained separately, each implemented differently. A shared protocol collapses that to M+N: each client implements the protocol once, and each service ships one server. The Language Server Protocol solved the same problem for editor-language pairs in 2016 (one server per language, not one per editor-language combination). MCP applies the same insight to AI tool-service pairs.
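To put illustrative numbers on the difference (the counts below are made up for the example, not a census of the ecosystem):

```python
# Illustrative arithmetic for the integration problem (made-up counts).
clients = 10   # AI tools: Claude Desktop, Cursor, VS Code, ...
services = 50  # external services: GitHub, Slack, Postgres, ...

# Without a shared protocol: one bespoke integration per (client, service) pair.
bespoke = clients * services  # 500 integrations to build and maintain

# With MCP: each client implements the protocol once, each service ships one server.
shared = clients + services   # 60 implementations total

print(bespoke, shared)  # 500 60
```

The gap widens as the ecosystem grows: adding one more client costs one protocol implementation instead of N new integrations.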
With MCP: GitHub builds one server. Every MCP client gets GitHub integration for free. Supabase builds one server. Same. Linear, Sentry, PostHog, Playwright: one server each, works everywhere.
Architecture: clients, servers, and the protocol
MCP has two sides: clients and servers.
An MCP client is an AI tool that wants to use external capabilities: Claude Desktop, Cursor, VS Code, Zed, etc. Clients initiate connections, discover what a server can do, and make requests.
An MCP server is a program that exposes capabilities to clients: the Supabase server, the GitHub server, the Filesystem server, etc. Servers respond to requests and return structured results.
Between them runs the Model Context Protocol itself: JSON-RPC 2.0 messages, carried either over standard I/O (for local servers, launched as child processes by the client) or over HTTP (for remote servers, accessed over the network; the spec's streamable HTTP transport superseded the original HTTP+SSE transport in 2025).
```
AI Tool (Client)
     │
     │  JSON-RPC 2.0
     │  over stdio (local) or HTTP/SSE (remote)
     ▼
MCP Server
     │
     ▼
External Service (GitHub, Supabase, filesystem, etc.)
```

When a client connects to a server, the first thing it does is capability negotiation: the server announces what it can do, the client learns the schema, and from that point forward the client can call the server's tools with structured arguments and get structured results back.
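On the wire, the handshake opens with an `initialize` request from the client. A minimal sketch of that message, built with nothing but the standard library (field names follow the spec's initialize handshake; treat the exact version string and client name as illustrative):

```python
import json

# Sketch of the first message a client sends: an "initialize" request.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # illustrative spec revision date
        "capabilities": {},               # what the client itself supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Serialize for the transport (stdio or HTTP); the server replies with its
# own capabilities, after which the client can call tools/list and friends.
wire = json.dumps(initialize_request)
print(wire)
```

The server's response advertises which of the three capability types below it supports, so the client never has to guess what a given server can do.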
What a server exposes: tools, resources, and prompts
MCP servers can expose three kinds of things:
Tools are callable functions with defined input schemas. A GitHub server might expose a create_issue tool that takes repo, title, and body arguments. The AI calls the tool, the server creates the issue, the server returns the result. This is the most common thing servers expose and what most developers think of when they talk about MCP.
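As a concrete sketch, here is roughly what the JSON-RPC request for that hypothetical `create_issue` tool looks like. The `tools/call` method and the name/arguments shape come from the MCP spec; the `repo`, `title`, and `body` fields are whatever input schema the GitHub server declares:

```python
import json

# Hypothetical tools/call request for the create_issue tool described above.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {
            "repo": "octocat/hello-world",
            "title": "Bug: login fails on Safari",
            "body": "Steps to reproduce: ...",
        },
    },
}

print(json.dumps(call_request, indent=2))
```

The structured result comes back in the matching JSON-RPC response, which is what lets the client feed it straight back to the model.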
Resources are readable data sources: files, database records, API responses. The client asks for a resource by URI and gets structured content back. A Filesystem server might expose each file in a directory as a resource.
Prompts are reusable prompt templates that servers can define and clients can invoke. Less common than tools, but useful for servers that want to guide how the AI approaches a task.
Who adopted it and when
Adoption after the November 2024 launch was faster than most protocols manage in their entire first year.
| Tool | MCP support added |
|---|---|
| Claude Desktop | November 2024 (launch day) |
| Cursor | Early 2025 |
| VS Code (Copilot agent mode) | 2025, native support |
| Windsurf | 2025 |
| Zed | 2025 (experimental, actively improving) |
| Claude Code (CLI) | 2025 |
| Codex CLI (OpenAI) | 2025 |
| Gemini CLI (Google) | 2025 |
| Continue | 2025 |
| Roo Code | 2025 |
OpenAI announced MCP support across ChatGPT desktop and Codex in early 2025. Google added MCP support to Gemini CLI. By the end of 2025, every major AI coding tool either supported MCP or had it on a near-term roadmap.
Why it matters: the network effect
Once you build a server, it works in every client that speaks the protocol. The Supabase team built their MCP server once. Now it works in Claude Desktop, Cursor, VS Code, Windsurf, Zed, and any other tool that adopts MCP. For free. Without Supabase doing any additional work per client.
As of early 2026, registries like Glama index over 21,000 MCP servers, Smithery lists over 7,000, and MCP.so hosts nearly 20,000 community-submitted entries. (Source: automationswitch.com/ai-workflows/where-to-find-mcp-servers-2026) Those catalogs overlap, but the trend is clear: the server ecosystem is growing faster than almost any comparable developer ecosystem in recent memory. Every new client that adopts MCP immediately gets access to every server that already exists. Every new server immediately works in every client that already supports it.
That's the network effect that made MCP go from a spec document to a de facto industry standard in under twelve months.
The remaining friction: installation
MCP solved the integration problem (one server, many clients) but left the installation problem open. Each client stores MCP server configuration in a different file, with a different schema. Connecting one server to five tools still requires translating the same config five times manually.
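To see the divergence concretely, here is the same server definition written out in two clients' config shapes. The file names and key layouts below are examples of the kind of variation the paragraph above describes (check each tool's current docs for the exact schema):

```python
import json

# One MCP server definition, stdio transport.
server = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
}

# Claude Desktop style: claude_desktop_config.json, top-level "mcpServers".
claude_config = {"mcpServers": {"filesystem": server}}

# VS Code style: .vscode/mcp.json, top-level "servers" with a "type" field.
vscode_config = {"servers": {"filesystem": {"type": "stdio", **server}}}

print(json.dumps(claude_config))
print(json.dumps(vscode_config))
```

Same server, same transport, two different files and two different schemas: multiply that by five tools and you have the translation chore described above.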
That's the gap MCPBolt fills. If you're using more than one MCP-capable tool, download MCPBolt and paste your server configs once instead of translating them for each tool separately.