MCP Servers: how AI agents communicate with your website
What is the Model Context Protocol?
The Model Context Protocol, MCP for short, is an open standard that defines how AI agents communicate with external systems. Where traditional APIs are designed for human developers to read documentation and wire up integrations by hand, MCP is built specifically for machine-to-machine communication with AI agents as the user. The protocol was developed by Anthropic and has since been widely adopted across the AI industry.
MCP solves a fundamental problem: AI models are powerful at processing language and reasoning, but they have no direct access to the outside world. They cannot query databases, call APIs or read files unless a bridge is built between the model and those external sources. MCP is that bridge.
To understand why MCP is so important for the future of the web, it helps to first understand what AEO entails and how AI models are increasingly consuming content actively. MCP takes that interaction to the next level: from passively reading to actively collaborating.
How MCP works: clients, servers and the protocol
The architecture of MCP is based on a client-server model. An MCP client is typically an AI agent or application acting on behalf of a user. An MCP server is a service that offers specific capabilities to that agent. The protocol defines how these two parties communicate with each other via a standardized JSON-RPC format.
- The MCP client (AI agent) discovers which servers are available and what capabilities they offer.
- The MCP server publishes a server card with metadata about available tools, resources and prompts.
- Communication happens via JSON-RPC 2.0, typically over stdio (for local servers) or HTTP with Server-Sent Events (SSE) for streaming.
- The protocol supports authentication via OAuth 2.0 for secure access.
- Servers can offer both stateless tools (like a calculator) and stateful resources (like database queries).
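The exchange described above can be sketched as a pair of JSON-RPC 2.0 messages. This is a minimal illustration, not SDK code; the tool name `search_products` and its arguments are hypothetical examples, not part of the protocol.

```typescript
// A sketch of the JSON-RPC 2.0 messages an MCP client sends to a server.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Step 1: the client asks the server which tools it offers.
const discover: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Step 2: the client invokes one of those tools with structured arguments.
const call: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_products",           // hypothetical tool name
    arguments: { category: "laptops" },
  },
};

console.log(JSON.stringify(discover), JSON.stringify(call));
```

The same envelope format carries every MCP interaction, which is what makes the protocol uniform enough for agents to drive autonomously.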
The difference between MCP and a traditional REST API
A common question is: "What does MCP add that a REST API doesn't already provide?" The answer lies in the discovery layer and semantic descriptions. With a REST API, a developer must manually read documentation, integrate endpoints and configure authentication. MCP standardizes this entire process so an AI agent can complete it autonomously.
// Traditional REST API: developer-driven
GET /api/v1/products?category=laptops
Headers: Authorization: Bearer
// MCP: agent-driven (discovery + semantic description)
1. Agent reads server card → discovers tools and capabilities
2. Agent selects tool "search_products" based on goal
3. Agent authenticates via OAuth discovery (automatic)
4. Agent calls tool with structured parameters
// The difference: the agent understands WHAT the tool does,
// not just HOW to call it.
Dive deeper: OAuth discovery for AI agents | A2A protocol: agent-to-agent communication | Robots.txt for AI crawlers
The MCP server card: your digital business card for AI
The heart of an MCP server is the server card, a JSON document that describes what the server can do. This document tells AI agents exactly which tools are available, which resources they can access and which prompts they can use. Below is an example of what such a server card looks like.
{
  "name": "aeo-scanner",
  "version": "1.0.0",
  "description": "AEO Scanner API for website analysis",
  "capabilities": {
    "tools": true,
    "resources": true,
    "prompts": true
  },
  "tools": [
    {
      "name": "scan_url",
      "description": "Scan a URL for AEO readiness and return scores",
      "inputSchema": {
        "type": "object",
        "properties": {
          "url": {
            "type": "string",
            "description": "The URL to scan"
          }
        },
        "required": ["url"]
      }
    },
    {
      "name": "get_recommendations",
      "description": "Get improvement recommendations for a previously scanned URL",
      "inputSchema": {
        "type": "object",
        "properties": {
          "scan_id": {
            "type": "string",
            "description": "The ID of a previous scan"
          }
        },
        "required": ["scan_id"]
      }
    }
  ],
  "resources": [
    {
      "uri": "aeo://scans/latest",
      "name": "Latest scan results",
      "description": "The most recent scan results for the account",
      "mimeType": "application/json"
    }
  ]
}
Tools versus resources
MCP makes an important distinction between tools and resources. Tools are active operations that the agent can perform, similar to functions. Resources are passive data sources that the agent can read. A tool might start a scan, for example, while a resource makes the results of previous scans available.
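That distinction is visible on the wire: a tool is invoked with `tools/call`, while a resource is fetched with `resources/read`. A short sketch, reusing the hypothetical `scan_url` tool and `aeo://` URI from the server card example:

```typescript
// Tools are invoked; resources are read. The JSON-RPC method name tells
// the server which kind of capability the agent is using.
const startScan = {
  jsonrpc: "2.0",
  id: 10,
  method: "tools/call", // active operation: perform a scan
  params: { name: "scan_url", arguments: { url: "https://example.com" } },
};

const readResults = {
  jsonrpc: "2.0",
  id: 11,
  method: "resources/read", // passive data: fetch existing results
  params: { uri: "aeo://scans/latest" },
};

console.log(startScan.method, readResults.method);
```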
Prompts: pre-built instructions
In addition to tools and resources, MCP also supports prompts. These are predefined instruction sets that the agent can use as a starting point for a task. Think of a prompt "analysis_report" that helps the agent summarize scan results into a clear overview. Prompts lower the barrier for agents to correctly perform complex tasks with your server.
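A prompt definition might look like the following sketch; the "analysis_report" name and its argument are illustrative, mirroring the hypothetical example above rather than any real server.

```typescript
// A sketch of a prompt entry as a server might list it: a name, a
// description, and the arguments the agent should supply.
const analysisReportPrompt = {
  name: "analysis_report",
  description: "Summarize scan results into a clear, readable overview",
  arguments: [
    { name: "scan_id", description: "The scan to summarize", required: true },
  ],
};

console.log(analysisReportPrompt.name);
```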
Why MCP is relevant for your website
The rise of AI agents that independently perform tasks for users is no longer a future concept. Tools like Claude with computer use, OpenAI's browsing agent and Google's Project Mariner are the precursors to a world where AI agents visit websites, compare products and execute transactions on behalf of consumers.
Websites that offer their services to AI agents via MCP gain a competitive advantage. Instead of an agent having to scrape your website and interpret HTML, you offer a structured, reliable interface that the agent can use directly. This is faster, more accurate and more scalable.
Practical examples by industry
- E-commerce: an AI agent can search for products via MCP, check availability and place an order on behalf of the user, without having to interpret your webshop interface.
- SaaS platforms: you can make dashboard data, user statistics and reports available via MCP. An agent can then generate a weekly report without opening your UI.
- Knowledge bases and documentation: MCP makes it possible to make specific articles, FAQ answers and manuals directly searchable for AI agents.
- Travel agencies and booking platforms: agents can search flights, compare prices and make bookings via standardized MCP tools.
MCP is still a relatively young standard, but adoption is accelerating. By implementing an MCP server now, or at least preparing the architecture, you position your website as a frontrunner in the agent economy.
Implementing MCP: first steps
Implementing an MCP server does not have to be complex. Start by identifying the core functionality you want to offer to AI agents. For most websites, these are the information retrieval functions: product data, articles, prices and availability.
- Identify the core functions of your website that are valuable for AI agents.
- Define tools for active operations (searching, ordering) and resources for passive data (catalogs, price lists).
- Implement the MCP server with JSON-RPC 2.0 as the transport protocol.
- Add OAuth 2.0 authentication for secured endpoints.
- Publish your server card at a discoverable location so AI agents can find your server.
- Test your implementation with existing MCP clients such as the Claude desktop app.
A minimal MCP server in Node.js
To make the concept tangible, here is a simplified example of an MCP server. This illustrates the core principles without production complexity.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-website",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "search_articles",
  "Search articles on the website based on a search term",
  { query: z.string(), limit: z.number().optional().default(10) },
  async ({ query, limit }) => {
    // searchDatabase is a placeholder for your own data-access layer
    const results = await searchDatabase(query, limit);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(results, null, 2),
      }],
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Securing your MCP server is a topic in its own right. Read more about how to set this up securely in our article on OAuth discovery for AI agents. And don't forget to update your llms.txt file with a reference to your MCP server, so AI models can also discover your capabilities through that route.
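Such an llms.txt entry could look like the sketch below. The URL and descriptions are hypothetical; adapt them to wherever you actually host your server.

```markdown
# My Website

> Example site offering articles and product data to AI agents.

## MCP
- [MCP server](https://example.com/.well-known/mcp.json): Tools for searching articles and checking product availability
```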
MCP and the broader agent infrastructure
MCP does not stand alone. It is part of a growing ecosystem of protocols and standards that together shape the agent economy. While MCP handles communication between an agent and your server, the A2A protocol handles communication between agents themselves. And robots.txt determines which content agents are allowed to access in the first place.
The future: a web of AI agents
MCP is part of a broader movement towards a web that is used not only by people, but also by AI agents. Together with standards such as OAuth discovery for agents, Web Bot Auth and robots.txt AI rules, MCP forms the protocols that enable the agent economy. Websites that adopt these standards become the preferred partners of AI agents.
The parallel with the early days of the web is striking. Just as websites needed to be optimized for search engines to be discoverable, websites now need to be optimized for AI agents to be usable. MCP is the HTTP of the agent world: the protocol that connects everything.
Key takeaways
- MCP is an open standard from Anthropic that defines how AI agents communicate with external systems via a standardized JSON-RPC protocol.
- The server card is the heart of MCP: a machine-readable document that describes tools, resources and prompts so agents can autonomously discover what your website has to offer.
- The distinction between tools (active operations) and resources (passive data sources) determines how agents interact with your server.
- MCP provides a structured alternative to web scraping, enabling agents to use your services faster, more accurately and more scalably.
- By implementing or preparing for MCP now, you position your website as a frontrunner in the emerging agent economy.
Frequently asked questions
Does MCP replace my existing REST API?
No, MCP does not replace your REST API. It is an additional layer that makes your existing functionality accessible to AI agents. You can build MCP tools that internally call your REST API. The added value lies in the standardized discovery and semantic descriptions that agents need to operate autonomously.
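A minimal sketch of that layering, with `fetchProducts` standing in for a call to a hypothetical `GET /api/v1/products` endpoint (in practice you would use `fetch()` against your real API):

```typescript
// Sketch: an MCP tool handler that delegates to an existing REST API.
// fetchProducts is a stub; a real implementation would call the REST
// endpoint here and pass the response through.
async function fetchProducts(category: string): Promise<object[]> {
  return [{ name: "Example laptop", category }]; // stubbed REST response
}

// The MCP tool handler: structured input in, structured content out.
async function searchProductsTool(args: { category: string }) {
  const products = await fetchProducts(args.category);
  return {
    content: [{ type: "text", text: JSON.stringify(products) }],
  };
}

searchProductsTool({ category: "laptops" }).then((r) =>
  console.log(r.content[0].text)
);
```

The REST API stays the single source of truth; the MCP layer only adds the discovery and semantic description that agents need.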
How do AI agents find my MCP server?
There are multiple discovery mechanisms. The most common is publishing your server card at a fixed URL, similar to how robots.txt and .well-known endpoints work. Additionally, you can register your MCP server with MCP directories and mention it in your llms.txt file. As the ecosystem grows, more standardized discovery methods will emerge.
Is MCP secure? Can an agent perform unwanted actions?
MCP contains multiple security layers. First, the server card defines exactly which operations are available. Second, OAuth 2.0 authentication ensures agents only get access to what they have permission for. Third, the user can determine exactly which scopes an agent may use via the OAuth consent flow. You therefore always maintain control over what an agent can do.
Should I build an MCP server right now?
That depends on your situation. If you offer an API or your website contains data that is valuable for AI agents (products, documentation, services), it is smart to start now. You can start small with a few read-only tools and expand later. If your website is primarily informational, focus first on the basics: a good llms.txt file, correct robots.txt configuration and structured data.
Which AI agents support MCP today?
The Claude desktop app and Claude Code from Anthropic were the first widely available MCP clients. Since then, support has been added by Cursor, Windsurf, Cline and various other AI tools. OpenAI has announced plans to support MCP. The expectation is that all major AI platforms will be MCP-compatible in the foreseeable future.
MCP does for AI agents what HTTP did for web browsers: it creates a universal language for communication between machines and services.
How does your website score on AI readiness?
Get your AEO score within 30 seconds and discover what you can improve.