David Soria Parra's Model Context Protocol


David Soria Parra co-created the Model Context Protocol while working at Anthropic, starting from a simple annoyance: constantly switching between Claude Desktop and his code editor to copy-paste context. That frustration became an open standard now supported by ChatGPT, Cursor, Gemini, and VS Code, with over 97 million monthly SDK downloads.

Background

GitHub | Twitter | Site | MCP Docs

MCP Origins

The protocol started at an Anthropic hackathon in 2024. Soria Parra and Justin Spahr-Summers wanted to solve what he calls “the M times N problem”: multiple AI clients needing to connect to multiple data providers. Without a standard, every integration requires custom work.

“MCP tries to enable building AI applications in such a way that they can be extended by everyone.”

Think USB-C: one connector instead of a different cable for every device. MCP provides:

| Component | Purpose |
| --- | --- |
| Tools | Functions the AI can call (file operations, API requests) |
| Resources | Data the AI can read (files, database schemas) |
| Prompts | Reusable prompt templates servers can expose |
| Sampling | Servers can request LLM completions back from the client |
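On the wire, each of these primitives is a family of JSON-RPC 2.0 methods. As a rough sketch (the method names come from the MCP spec; the tool name and arguments below are made up for illustration):

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Each MCP primitive maps to JSON-RPC methods:
#   tools     -> tools/list, tools/call
#   resources -> resources/list, resources/read
#   prompts   -> prompts/list, prompts/get
#   sampling  -> sampling/createMessage (sent server -> client)
call = jsonrpc_request(1, "tools/call", {
    "name": "read_file",                     # hypothetical tool name
    "arguments": {"path": "notes/todo.md"},  # hypothetical arguments
})
print(json.dumps(call))
```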

Architecture

MCP uses a client-server model where the AI host connects to multiple servers, each exposing capabilities:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "your-token"
      }
    }
  }
}
```

The protocol defines two transports: stdio for local child processes and HTTP for remote servers (originally HTTP with Server-Sent Events; newer spec revisions define Streamable HTTP). Claude Desktop, Cursor, and VS Code all support MCP as clients.
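Over stdio, framing is simple: one JSON-RPC message per line on the server process's stdin/stdout. A minimal sketch of the framing and the client's opening `initialize` request (the field names follow the MCP spec; the client name and version string here are illustrative):

```python
import json

def frame(message: dict) -> bytes:
    """stdio transport: one JSON-RPC message per newline-terminated line."""
    return (json.dumps(message, separators=(",", ":")) + "\n").encode()

# The first message a client sends is the initialize handshake.
init = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # a published spec revision
        "capabilities": {},               # client advertises what it supports
        "clientInfo": {"name": "example-client", "version": "0.1"},  # hypothetical
    },
}
line = frame(init)
assert b"\n" not in line[:-1]  # a framed message must be exactly one line
```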

M x N Problem

Before MCP:

- Every AI client shipped its own plugin or integration system
- Every data provider wrote a separate integration per client
- M clients and N providers meant M × N bespoke connectors

With MCP:

- Each client implements the protocol once
- Each provider ships one MCP server
- M + N implementations cover every client-provider combination
This is why adoption spread fast. OpenAI, Microsoft, Google, Docker, and GitHub all added support. Building one MCP server makes your tool accessible to every MCP-supporting AI client.
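The arithmetic behind the M times N argument is easy to make concrete (the client and source counts below are illustrative):

```python
# With M AI clients and N data sources, point-to-point integration
# needs M * N custom connectors; a shared protocol needs only M + N
# implementations (each side implements the protocol once).
def integrations(clients: int, sources: int, with_mcp: bool) -> int:
    return clients + sources if with_mcp else clients * sources

# e.g. 5 clients and 20 data sources:
print(integrations(5, 20, with_mcp=False))  # 100 bespoke integrations
print(integrations(5, 20, with_mcp=True))   # 25 protocol implementations
```

The gap widens with every new client or provider, which is one reason a standard becomes more valuable as adoption grows.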

Underused Features

Soria Parra notes that most developers only use tools, ignoring other primitives:

| Feature | What It Does |
| --- | --- |
| Sampling | Server requests LLM completions from the client (multi-agent patterns) |
| Resources | Expose readable data with URIs and MIME types |
| Prompts | Server-provided prompt templates users can invoke |
| Notifications | Real-time updates between client and server |

Sampling is the interesting one. An MCP server can request LLM completions from the client, so your server could analyze data, ask Claude for a summary, then return structured results. Multi-agent patterns without managing API keys on the server side.
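In sampling, the usual roles reverse: the server issues a JSON-RPC request and the client runs the completion with its own model and credentials. A sketch of such a request (the method name and parameter shape follow the MCP spec; the prompt text and token limit are made up):

```python
import json

# A server-initiated sampling request: the client, not the server,
# holds the model access and API keys needed to fulfill it.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{
            "role": "user",
            "content": {"type": "text",
                        "text": "Summarize these query results: ..."},
        }],
        "maxTokens": 200,
    },
}
# The client replies with an ordinary JSON-RPC response carrying the
# model's message, so the server never needs its own LLM credentials.
print(json.dumps(sampling_request))
```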

Open Source Governance

In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation. Co-founded with Block and OpenAI, the foundation also hosts goose (Block’s AI agent) and AGENTS.md (OpenAI’s agent configuration standard).

Soria Parra remains Lead Core Maintainer. The donation puts the spec under vendor-neutral governance: no single company controls MCP's direction, even as competing clients build on it.

Key Takeaways

| Principle | Implementation |
| --- | --- |
| Solve your own problem | Copy-paste frustration became a protocol |
| Protocol over products | M × N reduction from standardization |
| Open source from day one | Community contributions shaped the spec |
| Underutilized features matter | Sampling, resources, prompts need documentation |

Next: Simon Willison’s LLM Workflow

Topics: mcp open-source claude-code automation