If you've used Claude Desktop with file access, or seen "connect to GitHub" buttons in AI tools, you've used MCP without knowing it. Model Context Protocol is an open standard introduced by Anthropic in late 2024 that lets LLM applications talk to external tools through a unified interface. Eighteen months later, it's the dominant integration protocol in the AI tooling space — adopted by Claude, Cursor, Windsurf, Continue, and dozens of agents.
The problem MCP solves
Before MCP, every AI app that wanted to read your Notion, your GitHub, your database, or your local files had to write a custom integration for each. Cursor's GitHub support was different code from Claude Desktop's GitHub support, which was different from Windsurf's. Tool builders (the GitHubs, Notions, Linears) had to write or beg for individual integrations with each AI client.
MCP collapses this into a single protocol:
- MCP servers expose tools, resources, and prompts. One MCP server per data source or service: a GitHub server, a Postgres server, a filesystem server.
- MCP clients are the AI apps. Claude Desktop, Cursor, Windsurf — each speaks the same protocol.
- Any client × any server = working integration. The N×M integration problem becomes N+M.
If the analogy helps: MCP is to AI integrations what USB-C is to chargers. One protocol, plug-and-play, suddenly anything works with anything.
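A quick back-of-the-envelope sketch of that N×M-versus-N+M claim (the client and server counts here are invented purely for illustration):

```python
# Illustrative only: the counts are made up for the example.
clients = 10   # AI apps (Claude Desktop, Cursor, ...)
servers = 50   # data sources (GitHub, Postgres, filesystem, ...)

# Without a shared protocol: every client writes custom code per service.
bespoke_integrations = clients * servers   # N x M

# With MCP: each side implements the protocol once.
mcp_implementations = clients + servers    # N + M

print(bespoke_integrations, mcp_implementations)  # 500 vs 60
```

The gap widens as either side of the ecosystem grows, which is exactly why both clients and tool builders have an incentive to adopt the shared protocol.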
What an MCP server actually exposes
Three things, in protocol terms.
Tools — functions the model can call. Each tool has a name, description, and JSON schema for inputs. A GitHub MCP server might expose list_pull_requests, create_issue, get_file_contents. The model decides when to call them.
Resources — read-only data the model can pull in as context. A filesystem server exposes files; a database server exposes tables. Resources can be browsed by the user ("attach this file to the conversation") or fetched by the model autonomously.
Prompts — pre-built prompt templates the server provides for common workflows. A GitHub server might offer a "summarize this PR" prompt template that the user can invoke from a menu.
Most servers focus on tools — that's where the action is.
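To make "name, description, and JSON schema" concrete, here is roughly what a single tool entry in a server's tools/list response looks like. The field names follow the MCP spec; the tool itself and its schema values are a hypothetical sketch:

```python
import json

# Hypothetical GitHub-style tool entry, shaped like an MCP tool definition.
create_issue_tool = {
    "name": "create_issue",
    "description": "Create a new issue in a GitHub repository.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name"},
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["repo", "title"],
    },
}

print(json.dumps(create_issue_tool, indent=2))
```

The description and schema are what the model actually sees when deciding whether and how to call the tool, which is also why adversarial tool descriptions are a security concern (more on that below).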
What it looks like in practice
Claude Desktop with the filesystem MCP server installed:
- You ask: "What's in my ~/projects/builderworld folder?"
- Claude calls the read_directory tool from the filesystem MCP server.
- The server returns the directory listing.
- Claude reads it back to you, possibly suggesting a specific file to open.
- You confirm; Claude calls read_file and gets the contents.
From the user's perspective, Claude just "knows" how to read files. From the developer's perspective, the entire filesystem integration is one Python or TypeScript MCP server, ~200 lines of code, that any MCP-aware client can use.
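Under the hood, each step in that exchange is a JSON-RPC 2.0 message. A sketch of the tools/call round trip for the directory listing, with the message shapes following the MCP spec and the path and listing contents as placeholders:

```python
import json

# What the client sends when the model decides to call the tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_directory",
        "arguments": {"path": "~/projects/builderworld"},
    },
}

# What the server sends back: tool results are a list of content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "notes.md\nsrc/\nREADME.md"}],
        "isError": False,
    },
}

print(json.dumps(request))
print(json.dumps(response))
```

The client injects the returned content blocks into the model's context, which is why, from the user's seat, Claude simply "knows" what's in the folder.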
How to use MCP today
If you're a user: most major AI clients support MCP, with one-click installs from a registry. In Claude Desktop, go to settings, browse available MCP servers (filesystem, GitHub, Slack, Postgres, Notion, etc.), install, authorize. Cursor, Windsurf, and most code-focused agents have similar UIs.
If you're a builder: you can write an MCP server for your own product/data. SDKs exist in TypeScript, Python, Rust, and others. Minimum viable server: define a few tools, run a stdio or HTTP transport, register with your client. A simple server is one afternoon's work.
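To show what "define a few tools, run a stdio transport" amounts to, here is a hand-rolled, stdlib-only sketch of the core request loop. The real SDKs (e.g. the official Python SDK) handle framing, capability negotiation, and lifecycle for you; the echo tool here is a hypothetical stand-in:

```python
import json
import sys

# One hypothetical tool, shaped like an MCP tool definition.
TOOLS = [{
    "name": "echo",
    "description": "Echo back the given text.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a JSON-RPC response."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        text = request["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve():
    """Stdio transport: one JSON request per line in, one response out."""
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

A client launches the server as a subprocess and speaks this protocol over its stdin/stdout; with an SDK, the boilerplate above shrinks to a decorator per tool.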
The biggest registry of community servers is at modelcontextprotocol.io and on GitHub. Hundreds exist for everything from databases to design tools.
Why this matters more than it seems
MCP isn't just integration plumbing — it changes incentives.
For tool builders, writing one MCP server gives you compatibility with the entire AI client ecosystem. Notion writes one server, gets used in Claude, Cursor, ChatGPT (eventually), and any future AI tool. That's a much better ROI than per-integration deals.
For AI clients, MCP means you can offer thousands of integrations without writing them. Cursor's value isn't building a Slack integration; it's making a great IDE. MCP lets the integration come from someone else.
For users, MCP means "the AI I use today might gain a new capability tomorrow without an app update" — just install another server.
The meta-effect: a healthy ecosystem with low coupling, where AI clients and tool servers can evolve independently.
When NOT to reach for MCP
- One-off scripts. If you just need to call an API once from your own LLM script, the SDK and a regular HTTP call are faster than spinning up an MCP server.
- Hot paths inside your own product. MCP adds a process boundary and protocol overhead. For features tightly inside your app (e.g., a chat in your SaaS calling your own DB), direct calls are leaner.
- Untrusted servers. Installing a random MCP server gives it tool access in your AI client. Treat MCP servers like browser extensions: install only ones you trust.
Security caveats
A few realistic risks:
- A malicious MCP server could exfiltrate data the AI sees.
- A malicious server could give a dangerous tool an innocuous name and description, tricking the model into calling it (this overlaps with prompt injection).
- Tool descriptions are part of the prompt — if a server's description is adversarial, it can affect model behavior.
Most MCP clients now have permission UIs ("Allow Claude to use this tool: yes/no"), but the protocol itself doesn't enforce sandboxing. Treat MCP servers with appropriate caution.
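Because the protocol itself doesn't enforce sandboxing, any gating lives in the client. A minimal sketch of what a client-side permission gate amounts to; the names and structure here are invented for illustration, not taken from any real client:

```python
# Hypothetical user-approved allowlist, e.g. populated by a "yes/no" UI.
ALLOWED_TOOLS = {"read_directory", "read_file"}

def gated_call(tool_name: str, arguments: dict, call_tool):
    """Refuse any tool call the user hasn't explicitly approved."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not approved by user")
    return call_tool(tool_name, arguments)
```

Note what this does and doesn't protect against: it stops unapproved tool calls, but an approved tool backed by a malicious server can still misbehave, which is why the browser-extension trust model above still applies.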
Further reading
- What is tool use / function calling
- What is an AI agent
- What is prompt injection (and why it's dangerous)
- Build an agent loop from scratch (no framework)
- How to pick a coding agent