what is MCP? the Model Context Protocol explained

by Ray Svitla


MCP stands for Model Context Protocol. it’s an open protocol that lets AI models talk to external tools and data sources through a standard interface. Anthropic created it. everyone else is adopting it. it matters more than most people realize.

the short version: MCP is USB for AI. before USB, every device had its own proprietary connector. after USB, everything just plugs in. MCP does the same thing for AI tools.


the problem MCP solves

without MCP, every AI tool builds its own integrations from scratch. Claude wants to search the web? someone writes a web search function inside Claude. ChatGPT wants to read a database? OpenAI builds a database plugin. Cursor wants to access GitHub? they write GitHub API calls.

multiply this by every AI tool and every service people want to connect, and you get a combinatorial explosion of custom integrations. n tools times m services equals n×m bespoke connections. none of them compatible with each other.

MCP collapses this to n+m. each AI tool implements the MCP client protocol once. each service implements an MCP server once. everything works with everything.
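the arithmetic is easy to check. a quick sketch, with 10 tools and 50 services as illustrative numbers (not real ecosystem counts):

```python
# bespoke integrations: every tool implements every service itself
tools, services = 10, 50           # illustrative numbers, not real counts

bespoke = tools * services         # n x m custom connections, pre-MCP
shared = tools + services          # n clients + m servers, with MCP

print(bespoke)   # 500 pairwise integrations to build and maintain
print(shared)    # 60 implementations total
```

the gap widens as either side grows: adding one new service costs one server, not one integration per tool.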


how it works (without the jargon)

MCP has three pieces:

the client — your AI tool. Claude Code, Cursor, whatever. it knows how to ask questions through the protocol.

the server — a small program that connects to a specific service. there’s an MCP server for GitHub, one for Slack, one for PostgreSQL, one for web search. each server translates MCP requests into the service’s native language.

the protocol — the agreed-upon format for how clients and servers communicate. what tools are available? what parameters do they accept? what comes back?

┌──────────────┐     MCP Protocol     ┌──────────────┐
│              │◄────────────────────►│              │
│  Claude Code │     (JSON-RPC)       │  MCP Server  │──► GitHub
│  (client)    │                      │              │──► Slack  
│              │                      │              │──► Database
└──────────────┘                      └──────────────┘

when Claude Code needs to create a GitHub PR, it doesn’t call the GitHub API directly. it asks the GitHub MCP server “create a pull request with this title and description.” the server handles the API details. Claude Code never touches the GitHub API.
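on the wire, that request is a JSON-RPC 2.0 message. a sketch of the shape, assuming the spec's tools/call method; the tool name, arguments, and result text are illustrative, not the real GitHub server's output:

```python
import json

# a tools/call request as a client might send it over the wire
# (JSON-RPC 2.0 framing; tool name and arguments are illustrative)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_pull_request",
        "arguments": {
            "title": "Fix login bug",
            "body": "Handles the empty-password case.",
        },
    },
}

wire = json.dumps(request)  # what actually travels between client and server
print(wire)

# the server replies with a matching id and a result payload
# (result text is made up for the sketch)
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "pull request created"}]},
}
assert response["id"] == request["id"]  # responses are correlated by id
```

the client never needs to know which GitHub endpoint the server hit, what auth it used, or how it paginated. that's the whole point.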


why this matters

for users

you install MCP servers like apps. want Claude Code to access your Notion? install the Notion MCP server. want web search? install the search server. each one gives Claude new abilities without changing Claude itself.

claude mcp add github -- npx -y @modelcontextprotocol/server-github
claude mcp add notion -- npx -y @notionhq/notion-mcp-server

two commands. Claude Code can now manage your GitHub repos and Notion pages. see best MCP servers for the full catalog.

for developers

write one MCP server for your service, and every MCP-compatible AI tool can use it. you don’t need to build separate plugins for Claude, ChatGPT, Cursor, and whatever comes next. one implementation, universal compatibility.

the building MCP servers guide covers how.

for the ecosystem

MCP creates a shared infrastructure layer. tools compete on quality, not on how many integrations they’ve built. services get AI-accessible by implementing one standard, not twenty proprietary plugins.


what MCP exposes

an MCP server can provide three types of capabilities:

tools — actions the AI can take. “create a file,” “search the web,” “send a message.” tools have defined inputs and outputs.

resources — data the AI can read. database contents, file systems, API responses. resources are like read-only data sources.

prompts — pre-written prompt templates the server provides. less common, but useful for specialized workflows.

most MCP servers focus on tools. the GitHub server gives Claude tools like “create_pull_request,” “search_code,” “list_issues.” the Slack server gives “send_message,” “search_messages,” “list_channels.”
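a tool declaration is just structured metadata: a name, a description, and a JSON Schema for its inputs. a sketch of what a server might return for one tool (the shape follows MCP's tool listing; the schema details are my own illustration, not the real GitHub server's):

```python
# one entry from a server's tool listing (schema details are illustrative)
tool = {
    "name": "create_pull_request",
    "description": "Open a pull request on the current repository.",
    "inputSchema": {                  # plain JSON Schema
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["title"],
    },
}

def missing_arguments(tool: dict, arguments: dict) -> list[str]:
    """Return required fields absent from a call's arguments."""
    schema = tool["inputSchema"]
    return [key for key in schema.get("required", []) if key not in arguments]

print(missing_arguments(tool, {"body": "no title here"}))  # ['title']
print(missing_arguments(tool, {"title": "Fix bug"}))       # []
```

because the schema travels with the tool, the model can see what each tool expects before calling it, and the client can validate arguments before anything hits the network.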


the ecosystem (early 2026)

the ecosystem is growing fast. rough categories:

search & web: Brave Search, Exa, web fetch
code & DevOps: GitHub, GitLab, Docker, Kubernetes
databases: PostgreSQL, SQLite, Supabase, MongoDB
communication: Slack, Gmail, Discord
productivity: Notion, Google Drive, Obsidian, Linear
cloud: AWS, GCP, Terraform
monitoring: Prometheus, Sentry, PagerDuty
browser: Playwright, Browserbase
memory: Mem0, Qdrant

Anthropic maintains official reference servers. the community builds the rest. quality varies — some are production-ready, some are weekend projects. see the MCP server stacking guide for managing multiple servers.


the security question

MCP servers run locally on your machine (usually). they have whatever permissions you give them. the GitHub server needs a GitHub token. the AWS server needs AWS credentials. the Slack server can post as you.

this is powerful and dangerous in exactly equal measure. every MCP server you add is a new attack surface. a compromised server could exfiltrate your tokens, modify your repos, or message your colleagues. the protocol itself is secure; the implementations are as trustworthy as their authors.

read the sandboxing and security guide before connecting anything sensitive.


MCP vs function calling

if you’ve used OpenAI’s function calling or Claude’s tool use, MCP might sound familiar. the difference: function calling is built into a specific model’s API. MCP is a protocol that works across any model and any tool.

function calling: “Claude, here’s a function definition, call it when relevant.” MCP: “Claude, here’s a server with 15 tools, discover them dynamically, call them through a standard protocol.”

MCP is the generalization. function calling is a model-specific implementation detail.
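the discovery difference fits in a few lines. a toy in-memory "server" (names and structure are mine, not the real SDK's): the client never hard-codes the tool list, it asks, then dispatches.

```python
# toy sketch of dynamic tool discovery (not the real SDK; names are made up)

class ToyServer:
    """Holds tools the client knows nothing about until it asks."""

    def __init__(self):
        self._tools = {
            "search_code": lambda query: f"results for {query!r}",
            "list_issues": lambda: ["#1 login bug", "#2 flaky test"],
        }

    def list_tools(self) -> list[str]:
        # equivalent in spirit to MCP's tools/list
        return sorted(self._tools)

    def call_tool(self, name: str, *args):
        # equivalent in spirit to MCP's tools/call
        return self._tools[name](*args)

server = ToyServer()

# the client discovers capabilities at runtime instead of compiling them in
available = server.list_tools()
print(available)                                # ['list_issues', 'search_code']
print(server.call_tool("search_code", "auth"))  # results for 'auth'
```

with function calling, the tool definitions live in your request to the model. with MCP, they live on the server, and any client can discover them.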


where this is going

MCP is less than two years old and already supported by Claude Code, Cursor, Windsurf, Copilot (in VS Code), and a growing list of AI tools. the server ecosystem is expanding weekly.

the bet: MCP becomes the standard way AI connects to everything. like HTTP for web pages, SMTP for email, MCP for AI tool access.

whether that bet pays off depends on whether competing protocols emerge (they will) and whether MCP maintains enough momentum to become the default (it might). for now, it’s the most widely adopted option, and writing for it is the safest infrastructure investment you can make.

but here’s the takeaway that survives either way: standardized tool interfaces are inevitable. if not MCP, something else with the same architecture. decoupling the AI from the tools it uses through a standard protocol is sound engineering whichever protocol wins. understanding why MCP exists matters more than memorizing its API. protocols change. the problems they solve don’t.


further reading

best MCP servers — what to install
building MCP servers — how to create your own
MCP server stacking — managing multiple servers
context engineering — MCP as part of the bigger picture

