What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI models communicate with external tools, data sources, and services. Think of it as the USB-C of the AI world: a universal interface that lets any AI model plug into any service through a standardized protocol. Before MCP, every AI tool built its own proprietary integrations. MCP changes that by providing a single, well-defined protocol that both AI clients and tool servers can implement.
MCP follows a client-server architecture. The AI application (Claude Desktop, Cursor, Claude Code) acts as the MCP client. External services expose their capabilities through MCP servers: lightweight processes that translate the MCP protocol into specific API calls, database queries, or system operations. When the AI needs to read a file from Google Drive, query a PostgreSQL database, or post a message to Slack, it communicates through the MCP protocol, and the corresponding MCP server handles the actual interaction.
MCP servers run locally on your machine by default. Your data flows directly from the MCP server to the service; it does not pass through any third-party servers. This is a critical security property that makes MCP suitable for enterprise environments.
Understanding the MCP Architecture
An MCP server exposes three types of primitives to AI clients. Tools are executable functions: they let the AI take actions like 'create a GitHub issue' or 'run a SQL query.' Resources are data sources: they let the AI read information like 'the contents of this Google Doc' or 'the schema of this database.' Prompts are reusable templates that guide the AI's behavior in specific contexts. Most automation workflows rely heavily on tools and resources.
- Tools: Executable functions the AI can call. Each tool has a name, description, and JSON Schema defining its parameters. Example: a 'search_emails' tool that accepts a query string and returns matching emails.
- Resources: Read-only data sources identified by URIs. The AI can browse available resources and read their contents. Example: 'postgres://mydb/users' exposing the users table schema and sample data.
- Prompts: Pre-defined instruction templates that help the AI understand how to use a specific server effectively. Example: a prompt template for 'analyze quarterly sales data' that structures the AI's approach.
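Under the hood, the client discovers these primitives over JSON-RPC. As a rough illustration (the tools/list method name comes from the MCP specification; the search_emails tool is the hypothetical example from above), the client's discovery request looks like:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

and the server replies with its tool definitions, each parameterized by a JSON Schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_emails",
        "description": "Search the mailbox for messages matching a query",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}
```

The AI model never sees raw HTTP or database wire protocols; it only sees these tool definitions and decides which to call.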
Setting Up Your First MCP Server
Let's walk through setting up a real MCP server from scratch. We will build a server that connects to a PostgreSQL database, allowing your AI assistant to query data, inspect schemas, and generate reports, all through natural language. This is one of the most powerful and practical MCP use cases.
# Install the MCP SDK
npm init -y
npm install @modelcontextprotocol/sdk pg
npm install -D typescript @types/node @types/pg
# Create the project structure
mkdir src
touch src/index.ts
Now let's write the MCP server. The SDK provides a Server class that handles all the protocol details; you just need to register handlers for your tools and resources.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

const server = new Server(
  { name: "postgres-mcp", version: "1.0.0" },
  { capabilities: { tools: {}, resources: {} } }
);

// Define a tool to run read-only SQL queries. Note that handlers are
// registered against the SDK's request schemas, not method-name strings.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "query",
      description: "Run a read-only SQL query against the database",
      inputSchema: {
        type: "object",
        properties: {
          sql: { type: "string", description: "The SQL query to execute" },
        },
        required: ["sql"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "query") {
    const sql = request.params.arguments?.sql as string;
    // Safety: only allow SELECT statements
    if (!sql.trim().toUpperCase().startsWith("SELECT")) {
      return { content: [{ type: "text", text: "Error: Only SELECT queries are allowed." }] };
    }
    const result = await pool.query(sql);
    return { content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server on the stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);
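One caveat about the SELECT-prefix check in the handler: it would still pass a stacked statement like `SELECT 1; DROP TABLE users`. Here is a slightly stricter guard as a sketch (`isReadOnlyQuery` is our own illustrative helper, not part of the SDK). It is defense in depth only; the real boundary should be a database role with SELECT-only permissions, since in PostgreSQL even a WITH clause can contain data-modifying statements.

```typescript
// Hypothetical stricter guard for the query tool: rejects stacked
// statements and allows plain SELECTs plus WITH ... SELECT CTEs.
// A sketch only; pair it with a read-only database role.
function isReadOnlyQuery(sql: string): boolean {
  // Drop a single trailing semicolon, then forbid any that remain
  const trimmed = sql.trim().replace(/;\s*$/, "");
  if (trimmed.includes(";")) return false;
  const head = trimmed.toUpperCase();
  return head.startsWith("SELECT") || head.startsWith("WITH");
}
```

You would call `isReadOnlyQuery(sql)` in place of the prefix check inside the tools/call handler.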
Connecting MCP Servers to Your AI Client
Once your MCP server is built, you need to register it with your AI client. The configuration varies by client, but the principle is the same: you tell the client where to find the server and how to start it.
// Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json
// Claude Code: .mcp.json at your project root
{
"mcpServers": {
"postgres": {
"command": "npx",
"args": ["tsx", "/path/to/postgres-mcp/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
}
}
}
}
After adding the configuration, restart your AI client. You should now be able to ask natural language questions about your database: 'Show me the top 10 customers by revenue this quarter' or 'What tables exist in the database and how are they related?' The AI will use the MCP tools to query the database and present the results.
Real-World MCP Automation Examples
The database example above is just the beginning. MCP servers can connect to virtually any service. Here are five production-ready automation patterns we see teams deploying today.
- GitHub + Linear Integration: An MCP server that reads GitHub PRs, extracts the changes, creates Linear tickets for follow-up work, and posts summaries back to the PR. Teams report saving 3-5 hours per week on project management overhead.
- Slack + Analytics Pipeline: An MCP server that monitors Slack channels for data requests, queries your analytics warehouse, generates charts, and posts them back to the channel. No more waiting for the data team to run ad-hoc reports.
- CRM Data Assistant: Connect to Salesforce or HubSpot via MCP, letting sales teams ask natural language questions about pipeline data, generate forecasts, and draft follow-up emails, all through their AI assistant.
- Infrastructure Monitoring: An MCP server connected to Datadog or Grafana that lets on-call engineers diagnose issues by asking questions like 'What changed in the payment service in the last 2 hours?' instead of manually navigating dashboards.
- Content Publishing Pipeline: An MCP server that connects to your CMS (WordPress, Sanity, Contentful), letting writers draft content in AI, review it, and publish directly, including image optimization and SEO metadata generation.
Tip: Start with read-only MCP servers. Get comfortable with AI querying your data before you give it write access. You can always add mutation tools later once you have established trust in the workflow.
Security Best Practices for MCP Servers
Security is the most important consideration when building MCP servers, especially those that connect to production systems. The fundamental principle is least privilege: every MCP server should have the minimum permissions required for its specific use case. A server that generates analytics reports should have read-only database access. A server that creates GitHub issues should not have permission to merge PRs or modify repository settings.
- Use read-only credentials wherever possible. Create dedicated database users with SELECT-only permissions for analytics MCP servers.
- Implement input validation in every tool handler. Never pass user input directly to shell commands or SQL queries without sanitization.
- Set rate limits on MCP tool calls to prevent runaway automation. A bug in your prompting should not result in 10,000 API calls.
- Log every tool invocation with the full parameters. You need an audit trail for debugging and compliance.
- Use environment variables for all secrets. Never hardcode API keys, database passwords, or tokens in your MCP server code.
- Run MCP servers in containers or sandboxed environments in production to limit blast radius if something goes wrong.
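The rate-limiting point above can be sketched as a small in-process guard that each tool handler consults before doing work. The class name and the limit/window numbers below are illustrative; tune them to your workload.

```typescript
// Hypothetical sliding-window rate limiter for MCP tool calls.
// allow() returns false once `limit` calls have happened within the
// last `windowMs` milliseconds, stopping runaway automation loops.
class ToolCallLimiter {
  private calls: number[] = [];
  constructor(private limit: number, private windowMs: number) {}

  allow(now: number = Date.now()): boolean {
    // Keep only the timestamps still inside the window
    this.calls = this.calls.filter((t) => now - t < this.windowMs);
    if (this.calls.length >= this.limit) return false;
    this.calls.push(now);
    return true;
  }
}
```

A tool handler would call `limiter.allow()` at the top and return an error result instead of executing when it comes back false.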
Warning: Be extremely careful with MCP servers that have write access to production systems. A misinterpreted natural language instruction could result in data modification or deletion. Always implement confirmation steps for destructive operations.
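One way to implement that confirmation step is a two-phase gate: the first tool call only records the pending action and returns a token, and the destructive operation runs only when a second call echoes the token back. The sketch below is our own illustrative design, not an SDK feature.

```typescript
// Hypothetical two-phase confirmation gate for destructive tools.
// Phase 1: request() registers the action and returns a token that the
// client must surface to the user. Phase 2: confirm() returns the
// action to execute only for a known token, and tokens are single-use.
class ConfirmationGate {
  private pending = new Map<string, string>();
  private counter = 0;

  request(action: string): string {
    const token = `confirm-${++this.counter}`;
    this.pending.set(token, action);
    return token;
  }

  confirm(token: string): string | undefined {
    const action = this.pending.get(token);
    this.pending.delete(token); // single-use
    return action;
  }
}
```

In practice you would expose two tools, e.g. a `delete_record` that only returns the token and a `confirm_action` that takes it, so a single misread instruction can never complete the deletion on its own.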
Building Chained Workflows
The real power of MCP emerges when you chain multiple servers together. A single AI conversation can span multiple services: read data from your database, generate a report, save it to Google Drive, and send a Slack notification, all in one interaction. The AI orchestrates the workflow by calling tools from different MCP servers in sequence.
To build effective chained workflows, configure multiple MCP servers simultaneously in your client. Give each server a clear, descriptive name so the AI understands which server to use for each step. Write clear tool descriptions that explain not just what each tool does, but when it should be used. The AI uses these descriptions to plan its multi-step workflows.
// Multi-server configuration for a chained workflow
{
"mcpServers": {
"analytics-db": {
"command": "npx",
"args": ["tsx", "./mcp/analytics-db.ts"],
"env": { "DATABASE_URL": "postgresql://readonly@analytics:5432/warehouse" }
},
"google-drive": {
"command": "npx",
"args": ["tsx", "./mcp/google-drive.ts"],
"env": { "GOOGLE_CREDENTIALS_PATH": "./credentials.json" }
},
"slack": {
"command": "npx",
"args": ["tsx", "./mcp/slack.ts"],
"env": { "SLACK_BOT_TOKEN": "xoxb-your-token" }
}
}
}
Debugging MCP Servers
Debugging MCP servers can be tricky because they communicate over stdio: anything you console.log lands in the same stdout stream as the JSON-RPC messages and corrupts the protocol. The MCP SDK provides an Inspector tool that lets you test your servers interactively. Run your server with the Inspector to see exactly what messages are being exchanged between client and server.
# Use the MCP Inspector to debug your server
npx @modelcontextprotocol/inspector npx tsx ./src/index.ts
# This opens a web interface where you can:
# - List available tools and resources
# - Call tools with custom parameters
# - See the raw JSON-RPC messages
# - Verify your server's responses
Common debugging issues include: tools not appearing in the client (usually a naming or registration issue), tool calls returning errors (check your input validation and error handling), and timeout errors (MCP has default timeouts that you may need to increase for long-running operations like database queries on large tables).
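Since stdout belongs to the protocol, a simple workaround for ad-hoc debugging is to write diagnostics to stderr, which most clients capture in their logs. A minimal sketch (the helper names are ours):

```typescript
// stdout carries the JSON-RPC stream, so diagnostics go to stderr.
// formatDebug builds the log line; debugLog writes it without ever
// touching stdout. Illustrative helpers, not part of the SDK.
function formatDebug(message: string, data?: unknown): string {
  return data !== undefined
    ? `[mcp-debug] ${message} ${JSON.stringify(data)}`
    : `[mcp-debug] ${message}`;
}

function debugLog(message: string, data?: unknown): void {
  process.stderr.write(formatDebug(message, data) + "\n");
}
```

Inside a tool handler you might call `debugLog("tools/call", request.params)` to trace every invocation while keeping the protocol stream clean.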
What Is Next for MCP
MCP is still a young protocol, and the ecosystem is growing rapidly. The community has already published hundreds of open-source MCP servers covering everything from file systems to email to enterprise SaaS platforms. As more AI clients adopt MCP, the value of each server increases because it works across all compatible clients: build once, use everywhere.
Upcoming developments to watch include remote MCP servers (currently most run locally, but hosted servers are coming), OAuth-based authentication for multi-user deployments, and the MCP registry, a package manager-like system for discovering and installing MCP servers. The protocol is also being extended to support streaming responses and bidirectional communication, enabling more sophisticated real-time workflows.