A2A Protocol: the future of agent-to-agent communication


What is the A2A Protocol?

The Agent-to-Agent (A2A) protocol is an open standard initiated by Google and supported by a growing coalition of technology companies. The protocol defines a standardized way for AI agents to communicate with each other, delegate tasks and exchange results. Think of A2A as the HTTP of the AI agent world: a universal language that enables agents from different providers to collaborate seamlessly.

Why is this relevant for website owners? Because the web is increasingly visited not only by humans but also by autonomous AI agents that perform tasks on behalf of users. An AI agent booking a flight might compare information from dozens of websites. The A2A protocol determines how these agents identify themselves, what they can do and how they exchange information.

CONTEXT

A2A is complementary to Anthropic's Model Context Protocol (MCP). Where MCP defines the interaction between an AI model and local tools/data, A2A focuses on communication between two or more remote AI agents.

The Agent Card: your digital business card

The heart of the A2A protocol is the Agent Card: a JSON document that describes who an agent is, what it can do and how to communicate with it. Every A2A-compatible agent publishes an Agent Card at a predictable location, similar to how robots.txt works for web crawlers.

The Agent Card contains metadata about the agent (name, description, provider), a list of capabilities (which tasks the agent can perform), skills (specific abilities) and endpoint definitions (how to reach the agent). Together with conventions such as llms.txt, it gives machine clients a predictable point from which to discover your services.

// Example Agent Card (/.well-known/agent.json)
{
  "name": "Kobalt AEO Scanner Agent",
  "description": "Analyzes websites for AI-readiness and generates optimization reports",
  "url": "https://aeo-expert.nl/api/a2a",
  "version": "1.0.0",
  "provider": {
    "organization": "Kobalt Digital",
    "url": "https://www.kobaltdigital.nl"
  },
  "capabilities": {
    "streaming": true,
    "pushNotifications": false,
    "stateTransitionHistory": true
  },
  "authentication": {
    "schemes": ["Bearer"]
  },
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["application/json", "text/html"],
  "skills": [
    {
      "id": "scan-website",
      "name": "Website AEO Scan",
      "description": "Performs a full AEO and Agent-Readiness scan on a URL",
      "tags": ["aeo", "seo", "ai-readiness", "website-analysis"],
      "examples": [
        "Scan https://example.com for AI-readiness",
        "Generate an AEO report for my website"
      ]
    },
    {
      "id": "check-schema",
      "name": "Schema.org Validator",
      "description": "Checks and validates Schema.org markup on a page",
      "tags": ["schema-org", "structured-data", "validation"],
      "examples": [
        "Validate the Schema.org markup on https://example.com"
      ]
    }
  ]
}

How A2A communication works

The A2A protocol defines a standardized communication flow consisting of three phases.

Phase 1: Discovery

A client agent discovers available agents by fetching their Agent Cards. The cards are published at /.well-known/agent.json, a convention similar to /.well-known/openid-configuration for OAuth. The client agent reads the skills and capabilities to determine which agent is suitable for the requested task.
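The skill-matching step can be sketched in a few lines. Note that the A2A specification leaves skill selection up to the client agent; the tag-matching heuristic and the `find_skill` helper below are illustrative, not part of the protocol.

```python
from typing import Optional

def find_skill(agent_card: dict, wanted_tag: str) -> Optional[dict]:
    """Return the first skill on the card whose tags include wanted_tag."""
    for skill in agent_card.get("skills", []):
        if wanted_tag in skill.get("tags", []):
            return skill
    return None

# A trimmed-down version of the Agent Card shown above
card = {
    "name": "Kobalt AEO Scanner Agent",
    "url": "https://aeo-expert.nl/api/a2a",
    "skills": [
        {"id": "scan-website", "tags": ["aeo", "ai-readiness"]},
        {"id": "check-schema", "tags": ["schema-org", "validation"]},
    ],
}

skill = find_skill(card, "ai-readiness")
print(skill["id"])  # scan-website
```

A real client would fetch the card over HTTPS first; the matching logic stays the same.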

Phase 2: Task Execution

Once a suitable agent is found, the client agent sends a task request to the server agent's endpoint. The A2A protocol uses a JSON-RPC like message format with support for streaming, allowing long-running tasks (such as a comprehensive website scan) to send intermediate updates.

Tasks go through defined statuses: submitted, working, input-required, completed and failed. This state machine ensures that both agents always know what phase the task is in, even during network interruptions.
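The state machine above can be encoded as a simple transition table. The exact set of allowed transitions below is an assumption for illustration; consult the A2A specification for the authoritative rules.

```python
# Allowed A2A task state transitions (illustrative, not normative)
ALLOWED = {
    "submitted": {"working", "failed"},
    "working": {"input-required", "completed", "failed"},
    "input-required": {"working", "failed"},
    "completed": set(),  # terminal
    "failed": set(),     # terminal
}

def can_transition(current: str, nxt: str) -> bool:
    """Check whether a task may move from `current` to `nxt`."""
    return nxt in ALLOWED.get(current, set())

print(can_transition("working", "completed"))  # True
print(can_transition("completed", "working"))  # False
```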

Phase 3: Response

After task completion, the server agent sends the result back in one of the configured output modes. This can be structured JSON, but also HTML or plain text. The client agent processes the result and presents it to the end user or uses it as input for a next step.

Practical example: a complete A2A flow

Let us walk through a concrete example. A user asks a personal AI assistant to scan their website for AI-readiness. The assistant (client agent) finds a suitable scanner agent via discovery, sends a task request, and receives the scan report.

// Step 1: Client agent fetches Agent Card
GET https://aeo-expert.nl/.well-known/agent.json

// Step 2: Client agent sends task request
POST https://aeo-expert.nl/api/a2a
Content-Type: application/json
Authorization: Bearer 

{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "params": {
    "id": "task-abc-123",
    "message": {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "Scan https://example.com for AI-readiness"
        }
      ]
    }
  }
}

// Step 3: Server agent sends status updates (via streaming)
{"jsonrpc": "2.0", "result": {"id": "task-abc-123", "status": {"state": "working", "message": "Scanning..."}}}
{"jsonrpc": "2.0", "result": {"id": "task-abc-123", "status": {"state": "working", "message": "Schema.org analysis..."}}}

// Step 4: Server agent sends final result
{"jsonrpc": "2.0", "result": {"id": "task-abc-123", "status": {"state": "completed"}, "artifacts": [{"parts": [{"type": "text", "text": "AEO Score: 72/100..."}]}]}}

The impact on websites and content

The rise of agent-to-agent communication has direct consequences for how websites are built and optimized. AI agents are not passive content consumers: they actively perform tasks and need structured, machine-readable information.

  • Structured APIs become more important than ever. Agents prefer to communicate via JSON and defined endpoints, not by scraping HTML.
  • Machine-readable metadata (Schema.org, OpenAPI specs, Agent Cards) becomes the standard way agents discover your services.
  • Authentication and authorization must be agent-compatible. OAuth 2.0 with scoped tokens is the minimum.
  • Content must not only be readable for humans, but also interpretable for agents acting on behalf of humans.
  • Rate limiting and abuse protection must account for legitimate agent traffic alongside human traffic.

A2A versus MCP: when to use which?

A frequently asked question is how A2A relates to the Model Context Protocol (MCP). The short answer: they are complementary. MCP is designed for the interaction between an AI model and local tools or data sources. A2A is designed for communication between two or more remote agents over the network.

// Comparison A2A vs MCP

// A2A: Remote agent-to-agent communication
// - Agent discovers other agents via /.well-known/agent.json
// - Tasks are delegated via JSON-RPC over HTTPS
// - Supports streaming and long-running tasks
// - Focus: multi-agent orchestration

// MCP: Local model-to-tool communication
// - Model discovers tools via capabilities handshake
// - Tools are called locally via standardized interface
// - Focus: extending model capabilities with external tools

// Practical scenario: both protocols together
// 1. User asks AI assistant for website analysis
// 2. Assistant uses MCP to call local tools
//    (e.g., browser tool for screenshot)
// 3. Assistant uses A2A to call remote scanner agent
//    for in-depth analysis
// 4. Results are combined and presented

Preparing your website for A2A

Although the A2A protocol is still in development, you can already take steps to prepare your website for a world where AI agents play an increasingly important role.

  1. Implement robust Schema.org markup on all your pages. This is the foundation on which agents discover and understand your content.
  2. Offer an API for your key functionality. If you provide a service that agents can use, make it available through a documented REST or GraphQL API.
  3. Publish an OpenAPI specification for your API. This enables agents to automatically understand which endpoints are available and which parameters they accept.
  4. Consider publishing an Agent Card if you offer services relevant to AI agents. Even if the protocol is not yet widely adopted, you position yourself as an early adopter.
  5. Ensure proper authentication and authorization. Implement OAuth 2.0 with clear scopes so agents can securely act on behalf of users.
  6. Monitor your server logs for agent traffic. Start recognizing patterns in how AI agents approach your site and optimize accordingly.
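For step 6, a rough user-agent check is often enough to start spotting agent traffic in your logs. The marker list below covers a few known AI crawler tokens; treat it as a starting point rather than an exhaustive registry.

```python
# Illustrative list of user-agent substrings used by known AI crawlers
KNOWN_AGENT_MARKERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def looks_like_ai_agent(user_agent: str) -> bool:
    """Rough check: does the user-agent string mention a known AI crawler?"""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in KNOWN_AGENT_MARKERS)

print(looks_like_ai_agent("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(looks_like_ai_agent("Mozilla/5.0 (Windows NT 10.0)"))         # False
```

User-agent strings can be spoofed, so combine this with IP verification where published by the crawler operator.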

Publishing a minimal Agent Card

You do not need to build a full A2A endpoint to get started. A minimal Agent Card at /.well-known/agent.json already makes you discoverable for agents scanning the web for available services.

// Minimal Agent Card for an informational website
// Publish at: /.well-known/agent.json
{
  "name": "Your Company",
  "description": "Information about [your services/products]",
  "url": "https://yourdomain.com",
  "version": "0.1.0",
  "provider": {
    "organization": "Your Company Ltd.",
    "url": "https://yourdomain.com"
  },
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "company-info",
      "name": "Company Information",
      "description": "Provides information about our services and expertise",
      "tags": ["info", "services"]
    }
  ]
}
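Before publishing, it is worth sanity-checking the card. The validator below checks the fields used in the minimal example above; the full A2A specification may require additional fields, so treat this as a sketch.

```python
# Fields present in the minimal Agent Card example (assumed, not normative)
REQUIRED_FIELDS = ["name", "description", "url", "version", "skills"]

def validate_agent_card(card: dict) -> list[str]:
    """Return a list of problems; an empty list means the card looks OK."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in card]
    for i, skill in enumerate(card.get("skills", [])):
        for f in ("id", "name", "description"):
            if f not in skill:
                problems.append(f"skill {i}: missing {f}")
    return problems

card = {
    "name": "Your Company",
    "description": "Information about our services",
    "url": "https://yourdomain.com",
    "version": "0.1.0",
    "skills": [{
        "id": "company-info",
        "name": "Company Information",
        "description": "Provides information about our services",
    }],
}
print(validate_agent_card(card))  # []
```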

A2A in the broader protocol landscape

The A2A protocol does not stand alone. It is part of a larger ecosystem of standards that together shape the future of AI on the web. Anthropic's Model Context Protocol (MCP) defines how AI models use local tools and data sources. OAuth discovery defines authentication for agents. The llms.txt standard provides instructions to AI crawlers. And robots.txt remains the fundamental layer for crawl management.

Together these protocols form the infrastructure of the agentic web: a web where not only humans but also AI agents are active participants. For website owners the message is clear: those who invest early in these protocols build an advantage that latecomers will struggle to close once agent-to-agent communication goes mainstream.

Want an overview of how all these protocols together determine your AI visibility? Start with our introduction to AEO which describes the complete strategy.

Key takeaways

  • A2A is an open standard from Google that defines how AI agents communicate, delegate tasks and exchange results.
  • The Agent Card (/.well-known/agent.json) is the discovery mechanism through which agents discover each other's capabilities.
  • A2A and MCP are complementary: MCP for local model-tool interaction, A2A for remote agent-agent communication.
  • You can start now by implementing robust Schema.org markup, documenting an API and preparing OAuth 2.0.
  • Publishing a minimal Agent Card positions you as an early adopter and makes you discoverable for AI agents.

Frequently asked questions

Should I publish an Agent Card for my website now?

That depends on your situation. If you offer an API or service that is meaningful for AI agents (think booking systems, data APIs, analysis tools), then an Agent Card is a smart investment. For informational websites or blogs it is not yet necessary, but a minimal Agent Card costs little effort and positions you as a forward-thinking organization. A2A adoption is growing rapidly; those who join early benefit the most.

What is the difference between A2A and MCP in practice?

MCP (Model Context Protocol) is designed for the interaction between an AI model and local tools or data sources, such as a browser extension or a database connector. A2A is designed for communication over the network, between two independent AI agents. In practice they work together: an AI assistant can use MCP to call local tools, and A2A to query external agents. They do not compete, they complement each other.

Is A2A secure? How do I prevent abuse?

A2A requires authentication (by default via OAuth 2.0 Bearer tokens) and specifies clear capability boundaries via the Agent Card. You determine which skills you offer and what authentication you require. Rate limiting, IP whitelisting and scoped tokens are additional security measures. The protocol is designed with security-by-design, but the implementation of that security is your own responsibility.

Which companies support the A2A protocol?

A2A was initiated by Google and is supported by a growing coalition including Salesforce, SAP, Atlassian, MongoDB and other technology companies. The standard is open and anyone can build an A2A-compatible agent. The broad industry support suggests that A2A will become one of the standards for inter-agent communication, alongside MCP for model-tool interaction.

How does A2A relate to llms.txt and robots.txt?

Each protocol addresses a different layer. robots.txt controls which bots may crawl your site. llms.txt describes your content specifically for language models. A2A defines how AI agents can interact with your services. They are complementary: robots.txt and llms.txt are passive (they wait for a bot to visit), while A2A enables active, bidirectional communication. A complete AI strategy implements all three.

The web is evolving from a network for humans to a network for humans and agents. A2A is the language agents need to collaborate. Those who speak that language are ready for the future.

