MCP Server Tutorial: Build Your First Server in 2026
If you've been building with AI tools lately, you've probably heard about the Model Context Protocol. MCP is quickly becoming the standard way to connect LLMs to external tools and data sources, and knowing how to build an MCP server is a skill every developer should pick up. In this MCP server tutorial, I'll walk you through everything from understanding the protocol to shipping a production-ready server with TypeScript.
I've built several MCP servers over the past year (for database access, API integrations, and internal tooling), and I want to share what actually works, what doesn't, and the mistakes that cost me hours of debugging.
What is Model Context Protocol?
The Model Context Protocol (MCP) is an open standard, originally created by Anthropic and now hosted by the Linux Foundation, that defines how AI applications communicate with external tools and data sources. Think of it as a universal plug system: instead of building custom integrations for every AI platform, you build one MCP server and any compatible client can use it.
The architecture has three key players:
- MCP Host: The application the user interacts with (Claude Desktop, VS Code, Cursor, etc.)
- MCP Client: A protocol-level connector that maintains a session with your server
- MCP Server: Your code, a lightweight service that exposes capabilities to the AI
An MCP server can provide three types of capabilities:
- Tools: Functions the AI can call (like querying a database or creating a file)
- Resources: Data the AI can read (like documentation or configuration)
- Prompts: Reusable prompt templates the AI can use
This separation keeps things clean. The AI discovers what's available, decides what to use, and calls your server through a well-defined interface.
Why MCP Matters for Developers
Before MCP, connecting an LLM to your tools meant writing brittle API wrappers, managing custom authentication flows, and dealing with different interfaces for every AI platform. MCP changes this in three important ways.
Standardization eliminates boilerplate. One protocol means one integration. Your MCP server works with Claude, ChatGPT (via the OpenAI Agents SDK), Cursor, Windsurf, and any other MCP-compatible client. Build once, connect everywhere.
Security is built into the protocol. The 2025 specification revision added OAuth 2.1 for HTTP-based transports, and the protocol enforces clear boundaries between what the AI can discover, read, and execute. You're not bolting security on after the fact.
Agent workflows become composable. When every tool speaks the same protocol, you can chain MCP servers together. Your AI agent can query a database server, pass results to an analysis server, and write findings to a documentation server, all through the same interface.
The 2026 MCP roadmap emphasizes Streamable HTTP transport for remote servers, which means MCP servers are moving beyond local development tools into production infrastructure. This is the right time to learn.
Getting Started with MCP Servers
Let's build a practical MCP server. We'll create a server that provides development utilities: a tool to analyze code complexity and a resource that serves project documentation. This is more useful than the typical "hello world" example and demonstrates real patterns you'll use.
Prerequisites
You need Node.js 18+ and TypeScript installed. Create your project:
mkdir mcp-devtools && cd mcp-devtools
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Update your tsconfig.json to target ES2022 with NodeNext module resolution, and set outDir to ./dist. You'll also want "type": "module" in package.json so Node treats the compiled output as ES modules.
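The relevant tsconfig.json fields look something like this (a minimal sketch; adjust rootDir and strictness to taste):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true
  },
  "include": ["src/**/*"]
}
```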
Setting Up the Server
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
  name: "dev-tools",
  version: "1.0.0",
  capabilities: {
    tools: {},
    resources: {},
  },
});
The McpServer class handles all the protocol details. The StdioServerTransport communicates over standard input/output, which is what local clients like Claude Desktop expect.
Adding Your First Tool
Now let's add a tool that analyzes code and returns a complexity summary:
server.tool(
  "analyze-complexity",
  "Analyze code complexity metrics for a given file content",
  {
    code: z.string().describe("The source code to analyze"),
    language: z
      .enum(["typescript", "javascript", "python"])
      .describe("Programming language of the code"),
  },
  async ({ code, language }) => {
    const lines = code.split("\n");
    const nonEmptyLines = lines.filter((l) => l.trim().length > 0);
    const functions = code.match(
      /(?:function\s+\w+|(?:const|let|var)\s+\w+\s*=\s*(?:async\s+)?(?:\([^)]*\)|[^=])\s*=>|\w+\s*\([^)]*\)\s*\{)/g
    );
    const conditionals = code.match(/\b(?:if|else|switch|case|\?)\b/g);
    const loops = code.match(/\b(?:for|while|do|\.forEach|\.map|\.filter|\.reduce)\b/g);
    const cyclomaticEstimate = 1 + (conditionals?.length || 0) + (loops?.length || 0);
    return {
      content: [
        {
          type: "text" as const,
          text: JSON.stringify(
            {
              totalLines: lines.length,
              codeLines: nonEmptyLines.length,
              functionCount: functions?.length || 0,
              cyclomaticComplexity: cyclomaticEstimate,
              avgFunctionLength: functions?.length
                ? Math.round(nonEmptyLines.length / functions.length)
                : nonEmptyLines.length,
              language,
              recommendation:
                cyclomaticEstimate > 10
                  ? "Consider refactoring: complexity is high"
                  : "Complexity looks manageable",
            },
            null,
            2
          ),
        },
      ],
    };
  }
);
Notice the key patterns here: Zod schemas for input validation, descriptive parameter docs, and structured JSON output. These aren't optional niceties; they directly affect how well the AI uses your tool.
Adding a Resource
Resources let the AI read data without executing code. Let's add a project configuration resource:
server.resource(
  "project-config",
  "config://current",
  async (uri) => {
    const config = {
      lintRules: ["no-unused-vars", "no-console", "prefer-const"],
      maxComplexity: 10,
      testCoverage: "80%",
      conventions: {
        naming: "camelCase for variables, PascalCase for types",
        imports: "Group by external, internal, relative",
        errors: "Always use custom error classes",
      },
    };
    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "application/json",
          text: JSON.stringify(config, null, 2),
        },
      ],
    };
  }
);
Connecting the Transport
Finally, wire everything up and start the server:
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Dev tools MCP server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});
Note: always log to stderr (via console.error), not stdout. The stdio transport uses stdout for protocol messages, so any console.log calls will corrupt the communication.
Testing with Claude Desktop
Build your project with npx tsc, then add it to Claude Desktop's config at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
  "mcpServers": {
    "dev-tools": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-devtools/dist/index.js"]
    }
  }
}
Restart Claude Desktop and you should see your tools available. Ask Claude to "analyze the complexity of this code" and paste a snippet; it will call your tool automatically.
Advanced Patterns: Building Production MCP Servers
Once you move beyond basics, here are the patterns that matter in production.
Error Handling That Helps the AI
Bad error handling is the number one reason MCP tools get ignored by the AI after a failure. Return descriptive errors with suggested fixes:
server.tool(
  "query-database",
  "Run a read-only SQL query against the project database",
  {
    query: z.string().describe("SQL SELECT query to execute"),
  },
  async ({ query }) => {
    if (!query.trim().toUpperCase().startsWith("SELECT")) {
      return {
        content: [
          {
            type: "text" as const,
            text: "Error: Only SELECT queries are allowed. Mutations must go through dedicated tools like 'insert-record' or 'update-record'.",
          },
        ],
        isError: true,
      };
    }
    try {
      // `db` is your database client (e.g. a pg Pool), not shown here
      const results = await db.query(query);
      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify(results.rows, null, 2),
          },
        ],
      };
    } catch (err) {
      // In strict TypeScript, a caught value is `unknown`, so narrow it
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [
          {
            type: "text" as const,
            text: `Query failed: ${message}. Check column names against the schema resource at config://schema.`,
          },
        ],
        isError: true,
      };
    }
  }
);
The isError: true flag tells the client this was a failure, and the message guides the AI toward a fix. This is far better than returning a generic "Something went wrong."
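Since every tool needs this same try/catch shape, I usually factor it into a small wrapper that converts thrown exceptions into isError results with a recovery hint. This is my own convention, not part of the MCP SDK; the withErrorHint name and the ToolResult type are simplified sketches:

```typescript
// Simplified shape of a tool result as sent back to the MCP client.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Wrap a tool handler so any thrown error becomes a structured
// isError result instead of an unhandled rejection. The hint gives
// the AI a concrete next step after a failure.
function withErrorHint(
  handler: () => Promise<string>,
  hint: string
): Promise<ToolResult> {
  return handler()
    .then((text) => ({ content: [{ type: "text" as const, text }] }))
    .catch((err) => ({
      content: [
        {
          type: "text" as const,
          text: `Error: ${err instanceof Error ? err.message : String(err)}. ${hint}`,
        },
      ],
      isError: true,
    }));
}
```

Inside a tool handler you would then write something like `return withErrorHint(() => runQuery(query), "Check column names against the schema resource.")` and get consistent failure messages everywhere.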
Streamable HTTP for Remote Deployment
For deploying MCP servers as remote services (the direction the protocol is heading), use Streamable HTTP instead of stdio:
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server and transport per request, torn down
  // when the response closes. Pass the parsed body to handleRequest,
  // since express.json() has already consumed the request stream.
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  const server = createServer(); // your McpServer factory
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001, () => {
  console.log("MCP server listening on http://localhost:3001/mcp");
});
This is the transport to use if you're building shared MCP servers that multiple clients connect to over the network.
Common Mistakes and How to Avoid Them
After building and debugging more MCP servers than I'd like to admit, here are the mistakes I see developers make repeatedly.
Exposing too many fine-grained tools. If your server has 20 tools that each do one tiny thing, the AI has to figure out which ones to chain together. Consolidate into outcome-oriented tools. Instead of get_user, get_orders, get_shipping_status, offer a single get_order_summary that returns everything the AI needs.
Forgetting await server.connect(transport). This one is subtle: your server will start, the host will connect, but nothing will respond. The connection handshake never completes. Always await the connect call.
Logging to stdout. With stdio transport, console.log writes to the same stream as protocol messages. Your log output corrupts the JSON-RPC communication and causes cryptic parsing errors. Use console.error for all debug logging.
Returning raw data dumps. Sending a 500-row query result back through MCP burns tokens and confuses the AI. Paginate, summarize, or ask the AI to narrow its request. Keep responses focused and under a few KB.
Skipping input validation. Even though MCP supports Zod schemas, many developers skip detailed descriptions and constraints. The AI literally reads your schema descriptions to decide how to call your tool. Vague schemas lead to bad inputs.
Not testing with MCP Inspector. The official MCP Inspector tool (npx @modelcontextprotocol/inspector) lets you call your tools interactively without going through an AI client. Use it during development; it's much faster than restarting Claude Desktop for every change.
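The "raw data dump" mistake above is easy to prevent with a small guard that caps result size before it leaves your server. Here's a sketch; the truncateRows name and the 50-row default are my own choices, not anything from the MCP SDK:

```typescript
// Cap a query result before sending it back through MCP. Returns the
// first `limit` rows plus a note telling the AI how much was omitted,
// so it can narrow the query instead of receiving a token-burning dump.
function truncateRows<T>(
  rows: T[],
  limit = 50
): { rows: T[]; note?: string } {
  if (rows.length <= limit) return { rows };
  return {
    rows: rows.slice(0, limit),
    note: `Showing ${limit} of ${rows.length} rows. Narrow the query (add a WHERE clause or LIMIT) to see specific rows.`,
  };
}
```

A tool handler would then return `JSON.stringify(truncateRows(results.rows), null, 2)` instead of the full result set, and the note nudges the AI to refine its request.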
FAQ
How is MCP different from function calling?
Function calling (like OpenAI's or Anthropic's tool use) is a feature of a specific AI API: you define tools in your API request and the model decides to call them. MCP is a protocol layer that sits between any AI client and any tool server. Function calling is how the AI decides to use a tool; MCP is how that tool is discovered, described, and executed. They work together: an MCP client translates your server's tools into function-calling format for the LLM.
Can I use MCP with models other than Claude?
Yes. MCP is model-agnostic. OpenAI's Agents SDK has native MCP support, and clients like Cursor, Windsurf, and Continue work with multiple models. The protocol doesn't care which LLM is making the decisions; it just defines how tools are exposed and called. Google Cloud also supports MCP through its Agent Development Kit.
Should I use stdio or HTTP transport?
Use stdio for local development tools and IDE integrations. It's simpler, faster, and doesn't require network configuration. Use Streamable HTTP when you need remote access, multi-client support, or you're deploying to a server. The MCP project has deprecated the older SSE transport, so avoid it for new projects. If you're just getting started, stdio is the right choice.
Conclusion
Following an MCP server tutorial is one thing; actually building MCP servers that work well in production is another. The protocol itself is straightforward: define tools with clear schemas, return structured results, and handle errors gracefully. The hard part is designing tools that an AI can actually use effectively.
Start with the patterns in this tutorial: a focused server with a few well-described tools, proper error messages, and stdio transport for local development. Once that works, explore Streamable HTTP for remote deployment and look into composing multiple servers for complex agent workflows.
The MCP ecosystem is growing fast. The 2026 roadmap signals a shift from local development tools to production infrastructure, and developers who understand the protocol now will have a significant advantage. Build something, break it, fix it, and ship it; that's how you actually learn this stuff.