40 definitions covering the full vocabulary of the Model Context Protocol (MCP) — servers, clients, hosts, transports, tools, resources, prompts, OAuth, sampling, and the AI agent concepts that surround MCP. A factual reference for developers building with Claude Code, Cursor, Windsurf, Claude Desktop, and VS Code Copilot.
An AI agent is an autonomous software system powered by an LLM that can plan, take actions via tools, observe results, and iterate toward a goal. Unlike a chatbot that just replies, an agent does things: it calls APIs, edits files, spawns sub-tasks, and makes decisions without step-by-step human guidance.
An agentic workflow is a multi-step task executed autonomously by an AI agent using a loop of planning, tool calls, and self-evaluation. Unlike linear scripts, agentic workflows adapt — the agent decides each step based on prior results, replans on failure, and can spawn sub-agents for parallel work.
A context window is the maximum number of tokens (subword units — roughly word fragments and punctuation) an LLM can process in a single inference. It includes the system prompt, conversation history, tool schemas, retrieved docs, and the new user message. Modern models range from 128K tokens (GPT-4 Turbo) to 1M+ (Claude Sonnet 4's 1M beta, Gemini 1.5 Pro).
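Budgeting against the window is simple arithmetic; a minimal sketch with invented token counts (no real model's tokenizer is consulted):

```python
# Hypothetical token counts for each part of a prompt; a real host would
# measure these with the model's tokenizer.
CONTEXT_WINDOW = 128_000  # e.g. a 128K-token model

budget = {
    "system_prompt": 1_200,
    "tool_schemas": 3_500,
    "conversation_history": 90_000,
    "retrieved_docs": 20_000,
    "new_user_message": 300,
}

used = sum(budget.values())
remaining = CONTEXT_WINDOW - used  # tokens left for the model's reply
assert used <= CONTEXT_WINDOW, "prompt would overflow the context window"
```

Hosts do exactly this kind of accounting to decide when to truncate history or compact old turns.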
Claude Code is Anthropic's official AI coding CLI, released in 2025. It runs in your terminal, reads and edits files in your project, executes shell commands, and uses MCP servers to extend its capabilities. It's an MCP host purpose-built for software engineering workflows.
Cursor is an AI-first code editor forked from VS Code, built by Anysphere. It provides in-editor chat, auto-complete, multi-file refactoring, and MCP server integration via `.cursor/mcp.json`. It's one of the most popular MCP hosts among developers.
Claude Desktop is Anthropic's official desktop app for macOS, Windows, and Linux. It's the flagship MCP host — MCP was first introduced in November 2024 alongside Claude Desktop's MCP integration. Users configure MCP servers in `claude_desktop_config.json` and the app spawns them at launch.
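A minimal `claude_desktop_config.json` sketch using the documented `mcpServers` shape; the server name and the filesystem path are placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

On launch, Claude Desktop spawns each configured command as a local stdio server.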
The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that lets AI applications connect to external tools, data sources, and systems through a unified interface. It uses JSON-RPC 2.0 and standardizes how LLMs call tools, fetch resources, and use prompts.
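As a sketch of the JSON-RPC 2.0 envelope MCP rides on, here is a `tools/list` request built and serialized in Python (the id is arbitrary):

```python
import json

# Every MCP message is a JSON-RPC 2.0 object: a "jsonrpc" version tag,
# an id (for requests that expect a response), a method, and params.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

wire = json.dumps(request)   # what actually travels over the transport
decoded = json.loads(wire)   # what the server parses back out
```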
An MCP server is a program that exposes tools, resources, or prompts to AI clients over the Model Context Protocol. It acts as the adapter between an external system (a database, SaaS, or filesystem) and any MCP-compatible AI agent like Claude Code or Cursor.
An MCP client is the component inside an AI application that connects to one or more MCP servers, discovers their capabilities, and relays tool calls between the LLM and the servers. Clients live inside hosts like Claude Desktop, Cursor, or Windsurf.
An MCP host is the top-level application that runs the LLM and coordinates one or more MCP clients. Claude Desktop, Claude Code, Cursor, Windsurf, and VS Code Copilot are all MCP hosts. The host handles user interaction, prompt composition, and dispatches tool calls to the right client.
An MCP tool is a callable function exposed by an MCP server. Each tool has a name, description, and JSON Schema for its input. LLMs discover available tools at session start and call them by name with matching arguments.
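A hedged sketch of that shape in Python; the `get_weather` tool and its schema are invented for illustration, and the required-field check is a naive stand-in for a real JSON Schema validator:

```python
# The three fields (name, description, inputSchema) match the shape MCP
# servers return from tools/list; the tool itself is hypothetical.
tool = {
    "name": "get_weather",
    "description": "Return current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def missing_required(args: dict, schema: dict) -> list[str]:
    """Naive required-property check; real clients use a JSON Schema validator."""
    return [k for k in schema.get("required", []) if k not in args]

errors = missing_required({"city": "Oslo"}, tool["inputSchema"])
```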
An MCP resource is a read-only piece of data exposed by a server, identified by a URI. Resources represent files, database records, API responses, or anything the AI can reference. Unlike tools, resources are passive — clients fetch them, servers don't execute side effects.
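A sketch of a `resources/read` exchange; the URI and file contents are hypothetical, while the envelope and `contents` shape follow the spec:

```python
# Client asks for a resource by URI...
read_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///project/README.md"},
}

# ...and the server returns its contents without executing anything.
read_result = {
    "contents": [
        {
            "uri": "file:///project/README.md",
            "mimeType": "text/markdown",
            "text": "# My project",
        }
    ]
}
```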
An MCP prompt is a reusable, server-defined message template that clients can surface to users as slash commands or menu items. Prompts let servers ship best-practice workflows (like `/review-pr` or `/debug-error`) that pre-compose context and instructions for the LLM.
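A sketch of fetching such a prompt via `prompts/get`; the prompt name and argument values are invented:

```python
# The client resolves a slash command like /review-pr into a prompts/get
# call; the server returns pre-composed messages for the LLM.
get_prompt = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",
    "params": {"name": "review-pr", "arguments": {"pr_number": "42"}},
}
```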
An MCP transport is the underlying communication channel between client and server. MCP defines three transports: stdio (local subprocess), SSE (Server-Sent Events over HTTP, now legacy), and streamable HTTP (the modern remote transport). All use JSON-RPC 2.0 messages.
An MCP capability is a feature flag exchanged during the initialization handshake, declaring what an MCP server supports (tools, resources, prompts, sampling, logging, etc.) and what the client supports. Capability negotiation makes MCP extensible — new features can be added without breaking existing implementations.
An MCP session is the stateful connection between a client and a server, starting with the `initialize` handshake and ending when the transport closes (MCP defines no dedicated shutdown method; stdio sessions end when the subprocess exits, remote sessions when the connection is terminated). Each session has its own capabilities, session ID (for remote transports), and lifecycle — servers typically keep per-session state like subscriptions or progress tokens.
MCP sampling lets a server request a completion from the host's LLM mid-session. This inverts the usual flow: normally the LLM calls server tools, but with sampling the server can ask the LLM to reason, summarize, or classify — enabling servers to build mini-agents of their own.
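A sketch of a `sampling/createMessage` request as the spec shapes it; the message text and token limit are invented:

```python
# Note the direction: this request travels server -> client, asking the
# host's LLM for a completion on the server's behalf.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this diff in one line."},
            }
        ],
        "maxTokens": 100,
    },
}
```

The host stays in control: it can review, modify, or refuse the request before anything reaches the model.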
MCP roots are URI boundaries (usually filesystem paths) that the client exposes to servers, telling them which locations they're allowed to access. Roots are how a client says 'you can read these project folders, not the rest of my disk'. They're the foundation of filesystem scoping in MCP.
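A sketch of a `roots/list` result plus a naive prefix-based containment check (real clients do proper URI path comparison; the paths are placeholders):

```python
# The client declares which folders the server may touch...
roots_result = {
    "roots": [{"uri": "file:///home/me/projects/app", "name": "app"}]
}

def within_roots(uri: str, roots: list[dict]) -> bool:
    """True if the URI falls under a declared root (naive prefix check)."""
    return any(uri.startswith(r["uri"]) for r in roots)

allowed = within_roots("file:///home/me/projects/app/src/main.py", roots_result["roots"])
denied = within_roots("file:///etc/passwd", roots_result["roots"])
```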
`initialize` is the first JSON-RPC method called on every MCP session. The client sends its protocol version, capabilities, and client info; the server responds with its version, capabilities, and server info. Tool calls and resource reads only work after a successful initialize + `notifications/initialized` handshake.
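A sketch of the exchange; the client and server names are invented, while the field names and the `2025-03-26` protocol revision follow the spec:

```python
# Client opens the session by declaring its version and capabilities...
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"roots": {"listChanged": True}, "sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# ...and the server answers with its own.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {"subscribe": True}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}
# The client then sends notifications/initialized and the session is live.
```

The two `capabilities` objects are where the negotiation described above actually happens.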
An MCP SDK is a language-specific library that implements the Model Context Protocol so you can build servers and clients without writing JSON-RPC by hand. Official SDKs exist for TypeScript, Python, Kotlin, Rust, C#, and Swift; community SDKs cover Go, Ruby, Java, and more.
MCP Inspector is the official debugging and exploration tool for MCP servers, maintained at `github.com/modelcontextprotocol/inspector`. It's a web UI that connects to any MCP server (stdio or remote), lists its tools and resources, and lets you invoke them manually — essential for testing servers during development.
An MCP marketplace is a curated catalog of MCP servers where developers can browse, install, and manage servers for their AI clients. Examples include MCPizy, Smithery, and Anthropic's official MCP registry. Marketplaces provide discoverability, install automation, and quality signals (downloads, ratings).
The MCPizy CLI is a command-line tool for installing, managing, and discovering MCP servers. Commands like `mcpizy install github` or `mcpizy list` automate the tedious parts of MCP setup — editing client configs, managing credentials, and keeping servers up to date across Claude Desktop, Claude Code, and Cursor.
An MCP registry is a central index of available MCP servers, their metadata, versions, and install commands. Anthropic maintains an official MCP registry; MCPizy, Smithery, and Glama run community registries. Registries are the backbone of MCP discovery and automated install tooling.
A progress notification is a server-to-client message that reports incremental progress for a long-running tool call. The client includes a `progressToken` when calling the tool, and the server sends `notifications/progress` messages referencing that token while the call is in flight.
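A sketch of the two messages involved; the tool name, token value, and progress numbers are illustrative:

```python
# The client opts in by putting a progressToken in the call's _meta...
call_with_token = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "index_repo",
        "arguments": {},
        "_meta": {"progressToken": "tok-1"},
    },
}

# ...and the server streams notifications referencing that token.
progress = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {"progressToken": "tok-1", "progress": 40, "total": 100},
}
```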
Prompt engineering is the practice of designing inputs (prompts) to LLMs to reliably produce desired outputs. It spans system prompts, few-shot examples, structured formats, tool descriptions, and chain-of-thought patterns. Good prompts make unreliable models reliable.
Stdio transport is the default MCP transport for local servers. The client launches the server as a subprocess and exchanges JSON-RPC messages over its stdin (client→server) and stdout (server→client) streams. It's simple, fast, and the preferred transport for locally installed MCP servers.
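A sketch of the framing, with `StringIO` standing in for the subprocess pipes (`ping` is a real MCP method; the id is arbitrary):

```python
import io
import json

# Each JSON-RPC message is one line of JSON on the pipe.
pipe = io.StringIO()

def send(stream, message: dict) -> None:
    stream.write(json.dumps(message) + "\n")  # newline-delimited JSON

send(pipe, {"jsonrpc": "2.0", "id": 1, "method": "ping"})

pipe.seek(0)
received = json.loads(pipe.readline())  # the other side parses line by line
```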
SSE transport was MCP's original remote transport. It uses Server-Sent Events (HTTP streaming) for server-to-client messages and separate HTTP POST endpoints for client-to-server messages. As of 2025, it's deprecated in favor of streamable HTTP, which is simpler and easier to deploy.
A tool call is when an LLM decides to invoke one of its available tools (functions) with specific arguments, rather than returning text. In MCP, a tool call is executed by sending a `tools/call` JSON-RPC request from the client to the server and receiving a structured result.
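A sketch of a `tools/call` round trip; the tool, arguments, and result text are invented, while the envelope and the `content`/`isError` result shape follow the spec:

```python
# The LLM picked a tool and arguments; the client wraps them in JSON-RPC.
call = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}

# The server runs the tool and returns structured content.
result = {
    "jsonrpc": "2.0",
    "id": 6,
    "result": {
        "content": [{"type": "text", "text": "12°C, overcast"}],
        "isError": False,
    },
}
```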
Tool use is the Anthropic term for what OpenAI calls 'function calling' — an LLM's ability to invoke pre-defined tools with structured arguments. In Claude's Messages API, tool use manifests as `tool_use` content blocks that the host executes and responds to with `tool_result` blocks.
A vector database is a specialized store optimized for searching high-dimensional vectors by approximate nearest neighbor (ANN) algorithms. It's the storage layer for embeddings, powering semantic search, RAG, recommendation systems, and similarity matching in AI applications.
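To make the query concrete, here is a toy brute-force nearest-neighbor search in plain Python; real vector databases use ANN indexes (e.g. HNSW, IVF) over high-dimensional embeddings, and these 3-D vectors are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny invented "embeddings" keyed by document id.
store = {
    "doc-cats":   [0.9, 0.1, 0.0],
    "doc-sports": [0.1, 0.9, 0.0],
    "doc-tax":    [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]
best = max(store, key=lambda doc_id: cosine(query, store[doc_id]))
```

Brute force scans every vector; ANN indexes trade a little accuracy for sublinear search over millions of them.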
VS Code with GitHub Copilot is Microsoft's pairing of the world's most popular code editor with GitHub's AI coding assistant. As of 2025, Copilot supports MCP natively in VS Code's agent mode — making VS Code a first-class MCP host alongside Claude Code, Cursor, and Windsurf.