Chapter 12 · MCP — The Model Context Protocol

The USB-C of AI.


"One protocol to plug them all." — The MCP pitch, shortened.

For most of 2023 and 2024, every AI client invented its own plugin format. ChatGPT had Plugins. Cursor had its own integrations. Claude Desktop had none. If you wrote a tool that let an LLM hit your internal API, you had to rewrite it for each client.

In November 2024, Anthropic released the Model Context Protocol (MCP) — an open, model-agnostic standard for exposing tools, resources, and prompts to any LLM client. By late 2025, OpenAI, Google, and Microsoft had all announced support. By 2026, it is the default plugin surface for agentic apps, the same way LSP became the default for editors.

If you learn one protocol in this handbook, learn this one.

In plain English. MCP is a standard shape for "tool boxes" that any AI app can pick up. Build your tool box once, and Claude Desktop, Cursor, Cowork, ChatGPT, or your own app can all use it.

An MCP-powered session in flight

sequenceDiagram
    autonumber
    participant U as User
    participant C as MCP Client (Claude Desktop)
    participant M as Model
    participant S1 as MCP Server: GitHub
    participant S2 as MCP Server: Postgres
    U->>C: "Summarize PRs that touched the orders table this week"
    C->>M: prompt + tool list + resources
    M-->>C: tool_call list_prs(since=7d)
    C->>S1: JSON-RPC list_prs
    S1-->>C: 14 PRs
    M-->>C: tool_call query(orders schema recent refs)
    C->>S2: SQL
    S2-->>C: rows
    M-->>C: "Here's a summary with links."
    C-->>U: Final answer

12.1 The shape of the problem MCP solves

flowchart LR
    subgraph Before MCP
    C1[Claude Desktop] --> P1[Plugin A]
    C2[Cursor] --> P2[Plugin A']
    C3[ChatGPT] --> P3[Plugin A'']
    C4[Your app] --> P4[Plugin A''']
    end
    subgraph After MCP
    D1[Claude Desktop] --> M[MCP Server]
    D2[Cursor] --> M
    D3[ChatGPT] --> M
    D4[Your app] --> M
    M --> S[Your system]
    end

One server, every client: the same leverage as "one REST API, every language."

12.2 The three primitives

An MCP server exposes three kinds of things to a client:

  1. Tools — functions the model can call. Exactly like tool-use from Chapter 10, but discovered dynamically by the client.
  2. Resources — read-only data the model (or client) can fetch: files, DB rows, API responses, configs. Identified by a URI.
  3. Prompts — pre-canned prompt templates that users can invoke (e.g., "summarize this ticket," "draft an RCA").

flowchart TB
    subgraph MCP Server
    T[Tools]
    R[Resources]
    P[Prompts]
    end
    C[MCP Client] -->|list, call| T
    C -->|list, read| R
    C -->|list, render| P
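Concretely, the client discovers each primitive with a list call. A sketch of the three result shapes as the client sees them (the tool, resource, and prompt entries are illustrative; the envelope fields follow the spec's tools/list, resources/list, and prompts/list results):

```python
import json

# Illustrative discovery results for the three primitives.
tools_list = {
    "tools": [
        {
            "name": "get_forecast",
            "description": "Return the N-day forecast for a city.",
            "inputSchema": {  # standard JSON Schema
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "days": {"type": "integer", "default": 3},
                },
                "required": ["city"],
            },
        }
    ]
}

resources_list = {
    "resources": [
        {
            "uri": "weather://cached/berlin",
            "name": "Cached forecast: Berlin",
            "mimeType": "text/plain",
        }
    ]
}

prompts_list = {
    "prompts": [
        {
            "name": "draft_rca",
            "description": "Draft a root-cause analysis from an incident.",
            "arguments": [{"name": "incident_id", "required": True}],
        }
    ]
}

# The client merges the tool schemas into the model's tool list verbatim.
print(json.dumps(tools_list["tools"][0]["inputSchema"], indent=2))
```

Note that the inputSchema is plain JSON Schema, which is why tool definitions translate so cleanly across model providers.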

12.3 Transport

MCP is JSON-RPC 2.0 over three possible transports:

  1. stdio: the client launches the server as a subprocess and exchanges newline-delimited JSON over stdin/stdout. Ideal for local tools.
  2. Streamable HTTP: a single HTTP endpoint that can stream responses; the transport for remote, multi-client servers.
  3. HTTP + SSE: the original remote transport, still seen in older servers but superseded by Streamable HTTP.

Most community servers ship stdio binaries; most enterprise deployments use HTTP.
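Whichever transport carries it, the payload is the same JSON-RPC 2.0 envelope. A sketch of one tools/call round trip as it would appear, newline-delimited, on a stdio pipe (the tool name and forecast text are illustrative):

```python
import json

# The client's request: call one tool with arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",
        "arguments": {"city": "52.52,13.41", "days": 3},
    },
}
wire = json.dumps(request) + "\n"  # stdio framing: one message per line

# The server's reply: content blocks, correlated by id.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # must match the request id
    "result": {
        "content": [{"type": "text", "text": "Max 21 C, min 12 C, rain Friday."}],
        "isError": False,
    },
}
```

The id correlation is what lets a client keep several tool calls in flight over one pipe.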

12.4 A minimal MCP server, in Python

from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("weather")
cache: dict[str, str] = {}  # last successful raw forecast per city

@mcp.tool()
def get_forecast(city: str, days: int = 3) -> dict:
    """Return the N-day forecast for a location given as "lat,lon"."""
    lat, lon = (s.strip() for s in city.split(","))
    r = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": lat,
            "longitude": lon,
            "daily": "temperature_2m_max,temperature_2m_min",
            "forecast_days": days,
        },
        timeout=5,
    )
    r.raise_for_status()
    cache[city] = r.text
    return r.json()

@mcp.resource("weather://cached/{city}")
def cached(city: str) -> str:
    """Return the last cached forecast for a city."""
    return cache.get(city, "no cached forecast")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

Save that as a script, register it with Claude Desktop or Cursor, and the model can now call get_forecast like any native tool. That's the whole demo.
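"Register it" means adding an entry to the client's config file. For Claude Desktop that file is claude_desktop_config.json; the server name and script path below are whatever you chose (a sketch, assuming Python is on the client machine's PATH):

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"]
    }
  }
}
```

On the next launch, the client spawns the script as a stdio subprocess and lists its tools.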

12.5 Why MCP matters more than it looks

A few things to notice:

  - Discovery is dynamic. The client asks the server what it offers at connect time; ship a new tool and every connected client picks it up without any client-side redeploy.
  - The server is model-agnostic. Nothing in the example mentions Claude, GPT, or Gemini; the client decides which model sees the tool list.
  - Tools are only a third of the story. Resources and prompts let a server shape context and workflows, not just expose functions.

12.6 The public registry

By 2026, the MCP registry contains thousands of community servers. The frequently useful ones:

  - Filesystem: scoped read/write access to local directories.
  - GitHub: issues, PRs, and code search.
  - Postgres: schema inspection and read-only queries.
  - Slack: channel history and message posting.
  - Browser automation (Puppeteer/Playwright): let the agent drive a real page.
  - Web search: results delivered as a tool call.

Plug in three or four of these and your agent has roughly the reach of a junior operations engineer.

12.7 Building your own MCP server

For a backend engineer, the highest-leverage weekend project in 2026 is this:

  1. Pick the 3–5 internal systems you touch most (service registry, staging DB, feature flags, deploy tool, incident ledger).
  2. Wrap each in an MCP tool. Read-only at first.
  3. Deploy as an HTTP MCP server inside your VPC.
  4. Register it with Claude Desktop, Cursor, and your team's agents.

Now your AI tools can see and reason about your company's actual systems. The productivity delta is larger than it sounds.
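Stripped of the SDK, the heart of step 2 is a dispatch table from tool names to read-only functions. A stdlib-only sketch with stubbed backends (every name and return shape here is illustrative; a real server would use the MCP SDK, as in the weather example):

```python
import json

# Stubbed backends standing in for real internal systems.
def search_services(query: str) -> list[str]:
    return [s for s in ("orders-api", "billing-api", "auth") if query in s]

def get_deploy_status(service: str) -> dict:
    return {"service": service, "version": "v142", "healthy": True}

# name -> (callable, description); read-only by construction.
TOOLS = {
    "search_services": (search_services, "Find services by name fragment"),
    "get_deploy_status": (get_deploy_status, "Current deploy state of a service"),
}

def handle(message: str) -> str:
    """Handle one JSON-RPC tools/call message and return the response."""
    req = json.loads(message)
    fn, _ = TOOLS[req["params"]["name"]]
    result = fn(**req["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    })

reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 7,
    "method": "tools/call",
    "params": {"name": "search_services", "arguments": {"query": "api"}},
}))
```

The dispatch-table shape is also where access control naturally lives: a tool that is not in the table simply does not exist.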

flowchart LR
    subgraph mcpCo[Your company's MCP server]
    T1[search_services]
    T2[get_deploy_status]
    T3[query_staging_db_ro]
    T4[list_feature_flags]
    T5[get_incident]
    end
    C[Claude Desktop / Cursor / Cowork / Your app]
    C <--> T1
    C <--> T2
    C <--> T3
    C <--> T4
    C <--> T5
    T1 --> R1[Service registry]
    T2 --> R2[Deployment system]
    T3 --> R3[(Staging DB)]
    T4 --> R4[Feature flag service]
    T5 --> R5[PagerDuty / Incident.io]

12.8 Security considerations

MCP is powerful because it exposes real systems to a model. Which means the same practices from Chapter 10 apply, more strongly:

  - Least privilege: read-only credentials and scoped tokens per server, never a shared admin key.
  - Treat tool output as untrusted input: anything a server returns can carry prompt injection back into the model's context.
  - Human approval for writes: put mutations behind a confirmation step; let reads flow freely.
  - Audit everything: log every tool call with arguments, caller, and result.
  - Vet third-party servers like any other dependency: a malicious server sees all the tool traffic in your sessions.
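One concrete instance of "read-only at first": gate any SQL tool behind a statement check before the query reaches the database. A minimal sketch (the keyword lists are illustrative, and no substitute for a database role without write grants):

```python
READ_ONLY_PREFIXES = ("select", "show", "explain", "with")
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "truncate", "grant")

def assert_read_only(sql: str) -> str:
    """Reject anything that is not an obviously read-only statement."""
    stmt = sql.strip().lower()
    if ";" in stmt.rstrip(";"):
        raise ValueError("multiple statements are not allowed")
    if not stmt.startswith(READ_ONLY_PREFIXES):
        raise ValueError("only read-only statements are allowed")
    if any(word in stmt.split() for word in FORBIDDEN):
        raise ValueError("statement contains a forbidden keyword")
    return sql
```

Pair the check with a database role that lacks write grants: keyword filters catch accidents, the role catches everything else.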

12.9 MCP vs "old-school" plugins — why this wins

Plugin formats tied each tool to one client; MCP inverts the dependency. Servers declare capabilities, clients discover them at runtime, and neither side needs to know about the other in advance. That is the same inversion LSP pulled off for editors, and it wins for the same reason: N clients x M integrations collapses to N + M.

12.10 Looking ahead

Emerging areas in the MCP ecosystem:

  - Authorization: OAuth-based auth for remote servers is now part of the spec, making multi-tenant deployments practical.
  - Sampling: servers can ask the client's model for completions, enabling server-side agent loops.
  - Registries and signing: official indexes so clients can trust what they install.

If the 2010s were about APIs, the late 2020s will be about MCP-style agent-facing interfaces. Writing them well is a durable skill.

Further reading & watching