MCP + A2A: The Two Protocols Quietly Rewiring How AI Agents Work

Bhupin Baral

April 28, 2026

mlops, agent-protocol

A practitioner’s breakdown of the agent infrastructure most teams are ignoring — and why that’s about to be expensive.

Most AI agent demos look impressive in a 30-second video.

Then someone tries to put two of them in production together, and the whole thing collapses into a swamp of custom HTTP calls, brittle JSON contracts, and Slack messages that start with “wait, why did the writer agent overwrite the research agent’s output again?”

This is not a model problem. It’s a protocol problem.

And in the last 12 months, two open standards have emerged to fix it: MCP (Model Context Protocol) from Anthropic, and A2A (Agent2Agent) from Google.

If you’re a founder, CTO, or engineer planning anything beyond a single-agent chatbot, these are not optional reading. They are quickly becoming the TCP/IP of the agent era — boring, foundational, and the difference between a system that scales and one that turns into a maintenance black hole.

Let’s break down what each one actually does, where they stop, and how they fit together.

The problem: agents are islands

A modern AI agent needs to do three things:

  1. Talk to tools — read a database, call an API, search the filesystem, run a query
  2. Talk to data — local files, vector stores, internal documents
  3. Talk to other agents — delegate, hand off work, share state

Until 2024, every team built all three of these from scratch. Custom function-calling schemas. Custom tool registries. Custom inter-agent message formats. Every new tool was a new integration. Every new agent was a new contract to maintain.

This is the “M × N” problem: M agents × N tools = M·N bespoke adapters, a multiplicative mess of glue code.

MCP solves the first two. A2A solves the third. Together, they collapse the integration matrix.
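The arithmetic behind that claim is easy to sketch: point-to-point wiring needs one adapter per agent–tool pair, while a shared protocol needs one adapter per agent plus one per tool. The numbers below are illustrative, not from any survey:

```python
# Illustrative integration-count arithmetic for the M x N problem.

def point_to_point(m_agents: int, n_tools: int) -> int:
    """Every agent carries a custom adapter for every tool."""
    return m_agents * n_tools

def shared_protocol(m_agents: int, n_tools: int) -> int:
    """Each agent speaks the protocol once; each tool exposes it once."""
    return m_agents + n_tools

# A modest stack: 5 agents, 20 tools.
print(point_to_point(5, 20))   # 100 bespoke integrations to maintain
print(shared_protocol(5, 20))  # 25 protocol endpoints
```

Adding a 21st tool costs five new integrations in the first model and exactly one in the second.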

Part 1: MCP — Agent ↔ Tools

Anthropic introduced MCP in late 2024 as an open protocol for connecting AI applications to tools and data sources. The pitch was simple: stop writing one-off integrations. Build once, connect everywhere.

The growth tells the story.

According to AgentSeal’s 2025 registry scan, MCP went from 714 servers in January 2025 to over 16,000 today. Anthropic’s own count puts it at 10,000+ active public servers, with adoption across ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code. In December 2025, Anthropic donated MCP to the Linux Foundation’s Agentic AI Foundation, ensuring it stays vendor-neutral.

That’s not hype velocity. That’s infrastructure velocity.

How MCP actually works

The protocol has four moving pieces:

  • Host — the LLM application (Claude Desktop, Cursor, your custom agent) that wants to access external capabilities
  • Client — a 1:1 connection inside the host that talks to a single MCP server
  • Server — a lightweight program that exposes a specific capability (filesystem access, GitHub, Postgres, your internal API)
  • Data sources — the actual things the server fronts: local files, databases, remote APIs

When you build an MCP server for, say, your company’s customer database, any MCP-compatible agent — Claude, a custom agent, a third-party tool — can use it without you writing a custom integration each time.

That’s the breakthrough. The M × N integration problem becomes M + N.
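On the wire, MCP is JSON-RPC 2.0: the client sends requests such as `tools/list` and `tools/call`, and the server answers with matching `id`s. Here is a minimal sketch of those message shapes using nothing but the standard library; the tool name `query_customers` and its arguments are invented for illustration:

```python
import json

# An MCP client asking a server which tools it exposes (JSON-RPC 2.0).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoking one of those tools. "query_customers" is a hypothetical
# tool an internal customer-database server might expose.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_customers",
        "arguments": {"plan": "enterprise"},
    },
}

# Requests and responses are correlated by id, so a host can keep
# several tool calls in flight over a single connection.
wire = json.dumps(call_request)
parsed = json.loads(wire)
print(parsed["method"])  # tools/call
```

In practice you would use an MCP SDK rather than hand-rolling dicts, but the shape of the conversation is exactly this.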

What this means for a startup

If you’re building AI features into an existing product:

  • You write one MCP server for your data layer
  • Any agent your team builds (or buys) can plug into it
  • You don’t rewrite the integration when you switch from one model provider to another
  • Your data security model lives in one place, not scattered across a dozen ad-hoc API wrappers

For founders worried about being locked into OpenAI or Anthropic, MCP is the most underrated piece of leverage you have. It decouples your data layer from your model layer.

Part 2: Where MCP stops

Here’s the thing nobody put on the slide deck when MCP launched: MCP only solves the agent-to-tool problem.

It does not solve the agent-to-agent problem.

If you have one agent that researches, one that writes, and one that reviews, MCP does nothing to help them coordinate. They can each independently call tools through MCP. But they can’t:

  • Discover each other’s capabilities
  • Negotiate who handles what
  • Share task state across a long-running workflow
  • Authenticate and trust each other across organizational boundaries

For a single-agent system, this doesn’t matter. For a multi-agent system — which is where serious AI workloads are heading — it matters enormously.

This is the gap A2A was built to fill.

Part 3: A2A — Agent ↔ Agent

Google announced the Agent2Agent protocol on April 9, 2025, with 50+ launch partners including Atlassian, Box, Salesforce, SAP, ServiceNow, and Workday. By mid-2025, that number had grown to 150+ organizations. In June 2025, Google donated A2A to the Linux Foundation, where it now lives as a vendor-neutral open protocol.

By Google Cloud Next 2026, A2A had reached version 1.2 and was running in production at companies like Microsoft, AWS, Salesforce, SAP, and ServiceNow.

A2A solves four specific problems MCP doesn’t touch:

1. Secure collaboration

Agents need to authenticate each other across organizational and platform boundaries. A2A uses signed agent cards — JSON files that describe an agent’s identity, capabilities, and authentication requirements. By v1.2, these are cryptographically signed for domain verification.

This matters because in any real multi-agent workflow, you’re going to have agents from different vendors, different teams, and sometimes different companies talking to each other. Without a trust layer, you’re back to writing custom auth glue.
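To make the agent card idea concrete, here is a simplified sketch of one as a plain dictionary. The field names are an approximation of the spec's schema, and the URL and skill IDs are made up:

```python
# An illustrative A2A Agent Card: a JSON document describing an
# agent's identity, capabilities, and auth requirements.
# Field names are simplified; the endpoint and skills are hypothetical.
agent_card = {
    "name": "research-agent",
    "description": "Deep research over internal and public sources",
    "url": "https://agents.example.com/research",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "deep-search", "description": "Long-running research tasks"},
    ],
    "authentication": {"schemes": ["bearer"]},
}

def requires_auth(card: dict) -> bool:
    """A peer checks the card before attempting a connection."""
    return bool(card.get("authentication", {}).get("schemes"))

print(requires_auth(agent_card))  # True
```

The signature (added in v1.2) would be carried alongside this document so a peer can verify the card actually belongs to the domain it claims.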

2. Task and state management

A2A handles long-running tasks with state that persists across multiple agent interactions. If your research agent kicks off a 20-minute deep search and your writer agent needs to check in periodically, A2A standardizes how that conversation works.

This is what’s missing from naive function-calling chains. A function call is a single request-response. A2A treats agent collaboration as a stateful conversation.
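The difference is easiest to see as a state machine. The sketch below uses simplified state names loosely modeled on the A2A task lifecycle; it is a conceptual illustration, not SDK code:

```python
# A sketch of A2A-style task state: a long-running task moves through
# named states, and collaborating agents can check in on it at any point.
# State names and transitions are simplified for illustration.
ALLOWED = {
    "submitted": {"working"},
    "working": {"input-required", "completed", "failed"},
    "input-required": {"working"},
}

class Task:
    def __init__(self, task_id: str):
        self.id = task_id
        self.state = "submitted"
        self.history = ["submitted"]

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

# A 20-minute research task the writer agent polls periodically:
task = Task("research-42")
task.transition("working")
task.transition("input-required")  # researcher needs clarification
task.transition("working")
task.transition("completed")
print(task.history)
```

A bare function call has none of this: no id to poll, no intermediate states, no way to ask for clarification mid-task.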

3. UX negotiation

When two agents communicate, they need to agree on how to present results — text, structured data, an artifact, a streaming update. A2A includes a negotiation step where agents declare what formats they can produce and consume.

In practice, this is what stops your reviewer agent from getting back markdown when it expected JSON.
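The negotiation itself amounts to intersecting declared formats. A toy version, assuming MIME-type strings and a consumer-preference ordering (both simplifications of the real protocol):

```python
# A sketch of A2A-style output negotiation: each side declares the
# formats it can produce or consume, and they settle on a common one.
def negotiate(producer_offers: list[str], consumer_accepts: list[str]) -> str:
    """Return the first format both sides support, in the consumer's preference order."""
    for fmt in consumer_accepts:
        if fmt in producer_offers:
            return fmt
    raise ValueError("no common format")

writer_offers = ["text/markdown", "application/json"]
reviewer_accepts = ["application/json", "text/plain"]
print(negotiate(writer_offers, reviewer_accepts))  # application/json
```

The reviewer gets JSON because that is what it asked for first, even though the writer would happily have sent markdown.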

4. Capability discovery

Agents publish what they can do via Agent Cards. Other agents can query these to find the right collaborator at runtime, without hardcoded routing logic.

This is the closest analog to how MCP exposes tool capabilities, applied to agents instead.
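Runtime discovery is essentially a query over published cards. A minimal sketch, with an invented in-memory registry standing in for whatever card store you actually run:

```python
# A sketch of capability discovery: route work by querying agent cards
# instead of hardcoding which agent handles what. The registry and
# skill names below are illustrative.
registry = [
    {"name": "research-agent", "skills": ["deep-search", "summarize"]},
    {"name": "writer-agent", "skills": ["draft", "edit"]},
    {"name": "review-agent", "skills": ["fact-check", "edit"]},
]

def find_agents(skill: str, cards: list[dict]) -> list[str]:
    """Return the names of all agents advertising a given skill."""
    return [card["name"] for card in cards if skill in card["skills"]]

print(find_agents("edit", registry))        # ['writer-agent', 'review-agent']
print(find_agents("deep-search", registry)) # ['research-agent']
```

Swap an agent out, publish a new card, and the routing updates itself; no orchestrator code changes.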

Part 4: How they fit together

This is where the picture clicks into place.

MCP handles the connection between an agent and its tools.
A2A handles the connection between agents.

When you build a real multi-agent system, an agent typically does both:

  • It is an MCP host — it has MCP clients connected to MCP servers, giving it access to tools and data
  • It also speaks A2A — it can discover, authenticate, and collaborate with other agents

In other words: an agent uses MCP to look outward at its tools, and A2A to look sideways at its peers.
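That dual role can be sketched in a few lines. Everything here is illustrative pseudostructure, not code from either SDK; the class and method names are invented:

```python
# A sketch of the dual role: one agent is an MCP host (outward, toward
# tools) and an A2A peer (sideways, toward other agents).
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.tools = {}   # MCP side: tool name -> handler
        self.peers = {}   # A2A side: peer name -> Agent

    # MCP side: connect to and invoke tool servers.
    def connect_tool(self, tool: str, handler) -> None:
        self.tools[tool] = handler

    def call_tool(self, tool: str, **kwargs):
        return self.tools[tool](**kwargs)

    # A2A side: discover peers and delegate work to them.
    def register_peer(self, peer: "Agent") -> None:
        self.peers[peer.name] = peer

    def delegate(self, peer_name: str, tool: str, **kwargs):
        return self.peers[peer_name].call_tool(tool, **kwargs)

researcher = Agent("researcher")
researcher.connect_tool("search", lambda q: f"results for {q!r}")

writer = Agent("writer")
writer.register_peer(researcher)
print(writer.delegate("researcher", "search", q="MCP adoption"))
```

The writer never touches the search tool directly: it speaks A2A to the researcher, and the researcher speaks MCP to its tool.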

Google made this explicit at Cloud Next 2026: A2A is designed to complement MCP, not compete with it. Google itself adopted MCP across its services in December 2025, launching managed remote MCP servers for Google Maps, BigQuery, Compute Engine, Kubernetes Engine, Cloud Run, Cloud Storage, AlloyDB, Cloud SQL, Spanner, Looker, and Pub/Sub.

Anthropic and Google now both ship infrastructure that assumes their agents will speak each other’s protocols. That’s a strong signal.

Part 5: Why this matters for founders right now

If you’re a startup founder or CTO building AI features into a product, here’s the practical takeaway:

1. Stop writing custom tool integrations. Every tool integration you write today that doesn’t go through MCP is technical debt you’ll pay back in 12 months. Build one MCP server for your core data and capabilities. Let any agent — yours, your customers’, your partners’ — connect through it.

2. Plan for multi-agent before you need it. Most AI products start as single-agent chatbots. The ones that win usually evolve into multi-agent workflows: a planner, a researcher, a writer, an executor. Designing your architecture around A2A from the start is much cheaper than retrofitting it after you have three agents in production.

3. The data security story changes. When your AI features depend on third-party API calls to OpenAI or Anthropic, every prompt is a data leak risk. When your agents use MCP to access your own data layer — whether on your infrastructure or behind a tightly scoped MCP server — you control the exposure surface. This is the foundation of any serious “deploy on your own infrastructure” strategy.

4. Vendor lock-in becomes a choice, not a default. Both MCP and A2A are open, governed by the Linux Foundation, and supported by every major model vendor. If you build on these protocols, switching from Claude to GPT-5 to a self-hosted Llama model becomes a configuration change, not a rewrite.

Part 6: What this doesn’t fix

Honest practitioner note: protocols are not a silver bullet.

  • Security is still your problem. A 2025 scan of 1,808 MCP servers found 66% had security findings. Open protocols mean an open attack surface. Sandboxing, scoping, and auditing your MCP servers is non-negotiable.
  • Latency adds up. Each protocol hop costs milliseconds. A multi-agent workflow with five A2A hops and twelve MCP tool calls is going to feel slow if you’re not careful.
  • Debugging is harder. Distributed systems problems are now agent problems. Tracing, observability, and structured logging across agent boundaries become essential, not optional.

These are tractable problems. But they’re real, and the protocols don’t solve them for you.

The lesson

Most teams are still writing custom integrations for problems that already have open protocols.

MCP and A2A aren’t hype. They’re the plumbing. They’re what the next decade of AI infrastructure is going to be built on, and the companies that figure this out now will be running circles around the ones still gluing OpenAPI specs to LangChain wrappers in 2027.

If you’re building anything in the agent space — even a simple chatbot today that you suspect will grow into something bigger — pull up the MCP spec, read the A2A docs, and start asking yourself a different question:

Not “what model should I use?” — but “what does my agent stack look like in two years, and is it built on protocols I control?”

Most teams haven’t asked that question yet.

The ones who do are going to have a real moat.

Sources

  • AgentSeal — We Scanned 1,808 MCP Servers. 66% Had Security Findings, 2025
  • Anthropic — Donating the Model Context Protocol to the Linux Foundation, December 2025
  • Google Developers Blog — Announcing the Agent2Agent Protocol (A2A), April 9, 2025
  • Linux Foundation — Launches the Agent2Agent Protocol Project, June 23, 2025
  • Google Cloud Blog — Agent2Agent Protocol v0.3 release, July 31, 2025
  • IBM Think — What Is Agent2Agent (A2A) Protocol?, 2025
  • The Next Web — Google Cloud Next 2026: A2A protocol at 150 orgs, 2026
  • Model Context Protocol Registry — registry.modelcontextprotocol.io

Written by a practitioner, not a vendor. If you’re shipping AI in production and hitting these walls, the protocols are real and they’re working today.

— deploy.real