MCP Servers and A2A Protocol

MCP (Model Context Protocol)

Definition

Components

MCP's architecture is divided into three main parts: the client, the server, and the protocol itself. These components work together to create a seamless, standardized pipeline for AI-tool interactions.
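To make that division concrete, here is a minimal sketch of the JSON shapes the client and server exchange in such a pipeline. The field names are illustrative (they match the simplified TODO server used later in this section), not a normative MCP schema:

```python
from typing import Any, TypedDict

class ToolSchema(TypedDict):
    """What the server advertises at its tool-discovery endpoint."""
    name: str
    description: str
    parameters: dict[str, Any]

class ToolCall(TypedDict):
    """What the client POSTs to the execution endpoint."""
    tool_name: str
    parameters: dict[str, Any]

# Example: the schema for an add_todo tool, and a matching call built from it
schema: ToolSchema = {
    "name": "add_todo",
    "description": "Add a new TODO item",
    "parameters": {"task": {"type": "string"}},
}
call: ToolCall = {"tool_name": schema["name"], "parameters": {"task": "Buy groceries"}}
print(call["tool_name"])  # add_todo
```

The protocol itself is simply the agreement on these shapes: any client that understands them can drive any server that serves them.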

How MCP Works

  1. Setup: A developer deploys an MCP server (e.g., for Google Calendar) and connects it to a client (e.g., Claude Desktop). The client discovers the server's tools by querying an endpoint like /tools, which returns JSON schemas describing available functions (e.g., "create_event" with parameters like "start_time").
  2. Interaction: The user prompts the AI (e.g., "Schedule a meeting at 3 PM"). The client passes this to the LLM, which generates a tool call based on the schema. The client sends a POST request to the server (e.g., /execute) with the parameters.
  3. Execution: The server authenticates the request, executes the action (e.g., creates the calendar event), and returns a JSON response with results or errors.
  4. Response: The client injects the response into the LLM's context, allowing the AI to refine or confirm the output to the user. This flow ensures secure, efficient, and standardized interactions, with features like rate limiting and encryption for enterprise use.

Python Code Demonstration: MCP Client Calling MCP Server

Here's a simplified Python example, adapted from GitHub repos such as RGGH/mcp-client-x and jlowin/fastmcp, demonstrating a basic MCP client calling a custom MCP server (a TODO list manager). It uses FastAPI for the server and requests for the client; the LLM's tool-selection step is mocked for simplicity. In practice, you'd run the server separately and connect via URL.

Server Code (mcp_server.py): Run this first to start the server.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import uvicorn

app = FastAPI(title="Simple MCP TODO Server")

# Sample data store
todos = []

class TodoItem(BaseModel):
    task: str

# MCP Tool Schema Endpoint
@app.get("/tools")
def get_tools():
    return {
        "tools": [
            {
                "name": "add_todo",
                "description": "Add a new TODO item",
                "parameters": {"task": {"type": "string", "description": "The task description"}}
            }
        ]
    }

# Execute Endpoint
@app.post("/execute")
def execute(request: dict):
    tool_name = request.get("tool_name")
    # Default to an empty dict so a missing/null "parameters" field
    # doesn't raise a TypeError on the membership check below
    params = request.get("parameters") or {}

    if tool_name == "add_todo":
        if "task" not in params:
            raise HTTPException(status_code=400, detail="Missing 'task' parameter")
        todos.append(params["task"])
        return {"result": f"Added TODO: {params['task']}", "todos": todos}
    raise HTTPException(status_code=404, detail="Tool not found")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

Client Code (mcp_client.py): This simulates an AI client querying the server.

import requests

# In a full client you would also import an LLM SDK (e.g. openai or anthropic)
# and let the model choose the tool call; that step is mocked below.

# Server URL
SERVER_URL = "http://localhost:8000"

# Discover tools
tools_response = requests.get(f"{SERVER_URL}/tools")
tools = tools_response.json()["tools"]
print("Available Tools:", tools)

# Simulate LLM prompt to decide tool call (in real use, integrate with LLM)
prompt = "Add a TODO: Buy groceries"
# Here, mock LLM output: decide to call 'add_todo' with params
tool_call = {"tool_name": "add_todo", "parameters": {"task": "Buy groceries"}}

# Call server
execute_response = requests.post(f"{SERVER_URL}/execute", json=tool_call)
if execute_response.status_code == 200:
    result = execute_response.json()
    print("Server Response:", result)
else:
    print("Error:", execute_response.json())

How to Run:

  1. Install dependencies: pip install fastapi uvicorn pydantic requests openai.
  2. Start the server: python mcp_server.py.
  3. In a separate terminal, run the client: python mcp_client.py.

This demonstrates discovery (/tools), invocation (/execute), and response handling. In advanced setups, integrate with an LLM library like LangChain for dynamic tool selection.
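As one step toward dynamic tool selection, a helper like the following can translate the schemas returned by /tools into the "tools" format OpenAI's chat-completions API expects, so the model (rather than a hard-coded mock) decides which tool to call. This is a sketch: to_openai_tools is a hypothetical name, and the required-parameters handling assumes every declared parameter is mandatory, as in the TODO server above.

```python
from typing import Any

def to_openai_tools(mcp_tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Convert MCP-style tool schemas (as served by /tools above) into
    OpenAI's function-calling 'tools' format."""
    converted = []
    for tool in mcp_tools:
        params = tool.get("parameters", {})
        converted.append({
            "type": "function",
            "function": {
                "name": tool["name"],
                "description": tool.get("description", ""),
                "parameters": {
                    "type": "object",
                    "properties": params,
                    # Assumption: treat every declared parameter as required
                    "required": list(params),
                },
            },
        })
    return converted

# Example with the TODO server's advertised schema
mcp_tools = [{
    "name": "add_todo",
    "description": "Add a new TODO item",
    "parameters": {"task": {"type": "string", "description": "The task description"}},
}]
openai_tools = to_openai_tools(mcp_tools)
print(openai_tools[0]["function"]["name"])  # add_todo
```

The converted list can then be passed as the tools argument to a chat-completions call, and the model's returned tool call forwarded to the server's /execute endpoint in place of the mocked tool_call in the client above.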

Advantages

Disadvantages

Use Cases

Key Differences Between MCP and A2A