In depth
A tool call is the fundamental mechanism by which AI agents take action. Modern LLMs (Claude, GPT-4, Gemini) can emit structured "tool use" messages that name a tool and provide JSON arguments matching that tool's schema. The host intercepts this message and routes it to the appropriate MCP client, which sends a `tools/call` JSON-RPC request to the server.
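To make the wire format concrete, here is a minimal sketch of the JSON-RPC 2.0 message an MCP client sends for a tool call. The tool name (`get_weather`) and its arguments are hypothetical; the envelope fields (`jsonrpc`, `id`, `method`, `params.name`, `params.arguments`) follow the `tools/call` request shape.

```python
import json

# Hypothetical tool call: the client names the tool and supplies JSON
# arguments that must match the schema the server advertised for it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin", "units": "metric"},
    },
}

print(json.dumps(request, indent=2))
```

From the client's perspective this is an ordinary JSON-RPC request; everything tool-specific lives inside `params`.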
The server executes the tool — which might mean querying a database, calling a SaaS API, reading a file, or running a script — and returns a structured result. This result is fed back to the LLM as the next turn of the conversation, and the LLM can then reason about what to do next.
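The server-side step can be sketched as a lookup-execute-return cycle. This is an illustration only, with a hypothetical tool registry and a toy `add` tool; a real MCP server (for example, one built on an MCP SDK) handles transport and schema validation for you. The result shape — a list of content blocks plus an `isError` flag — mirrors how MCP tool results are structured.

```python
def handle_tools_call(params, registry):
    """Dispatch a tools/call request to the named tool and wrap its output."""
    tool = registry[params["name"]]           # look up the named tool
    output = tool(**params["arguments"])      # execute it with the JSON arguments
    # Tool results carry a list of content blocks; plain text is the simplest.
    return {"content": [{"type": "text", "text": str(output)}], "isError": False}


# Toy registry standing in for real tools (database queries, API calls, ...).
registry = {"add": lambda a, b: a + b}

result = handle_tools_call({"name": "add", "arguments": {"a": 2, "b": 3}}, registry)
print(result["content"][0]["text"])           # → 5
```

The structured result is what gets serialized back over JSON-RPC and appended to the conversation for the model's next turn.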
Tool calls can be parallelized (Claude Opus 4 can emit multiple tool calls in one turn) and chained (the result of one tool informs the next). They are the atomic unit of agentic behavior.
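The chaining behavior amounts to a loop: call the model, run whatever tools it requested (possibly several in one turn), append the results, and repeat until the model stops requesting tools. A minimal sketch, where `call_model` and `run_tool` are hypothetical stand-ins for the LLM API and MCP dispatch:

```python
def agent_loop(call_model, run_tool, messages, max_turns=10):
    """Run tool calls and feed results back until the model is done."""
    for _ in range(max_turns):
        reply = call_model(messages)
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:
            return reply["text"]              # no tools requested: final answer
        for call in tool_calls:               # a turn may contain several calls
            result = run_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    raise RuntimeError("agent did not finish within max_turns")


# Tiny demo with canned model replies: one tool call, then a final answer.
turns = iter([
    {"tool_calls": [{"name": "add", "arguments": {"a": 1, "b": 2}}]},
    {"tool_calls": [], "text": "The sum is 3."},
])
answer = agent_loop(lambda msgs: next(turns),
                    lambda name, args: str(args["a"] + args["b"]), [])
print(answer)  # → The sum is 3.
```

The `max_turns` cap is a common safety valve so a confused model cannot loop forever.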
In MCP, tool call semantics are identical regardless of server — a GitHub tool call looks the same as a Stripe tool call from the protocol's perspective.