MCP Glossary

Tool Call

TL;DR

A tool call is when an LLM decides to invoke one of its available tools (functions) with specific arguments, rather than returning text. In MCP, a tool call is executed by sending a `tools/call` JSON-RPC request from the client to the server and receiving a structured result.

In depth

A tool call is the fundamental mechanism by which AI agents take action. Modern LLMs (Claude, GPT-4, Gemini) can emit structured 'tool use' messages that name a tool and provide JSON arguments matching that tool's schema. The host intercepts this message, routes it to the appropriate MCP client, and the client sends a `tools/call` JSON-RPC request to the server.

The server executes the tool — which might mean querying a database, calling a SaaS API, reading a file, or running a script — and returns a structured result. This result is fed back to the LLM as the next turn of the conversation, and the LLM can then reason about what to do next.
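As a minimal sketch of that server-side step, assuming a hypothetical in-process tool registry (the names below are illustrative, not the MCP SDK's API), dispatching a `tools/call` request and building the structured result might look like:

```python
import json

# Hypothetical tool registry: tool name -> plain Python function.
# A real MCP server registers tools with schemas via the SDK; this
# sketch only shows the dispatch-and-respond shape of tools/call.
TOOLS = {
    "get_weather": lambda args: f"Weather for {args['location']}: 18°C",
}

def handle_tools_call(request: dict) -> dict:
    """Dispatch a tools/call JSON-RPC request to the named tool."""
    params = request["params"]
    tool = TOOLS.get(params["name"])
    if tool is None:
        # Unknown tool: report a tool-level error in the result.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text",
                                        "text": f"unknown tool: {params['name']}"}],
                           "isError": True}}
    try:
        text = tool(params.get("arguments", {}))
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text", "text": text}],
                           "isError": False}}
    except Exception as exc:
        # Execution failures also come back as isError results,
        # so the LLM can see and react to them.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text", "text": str(exc)}],
                           "isError": True}}

req = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
       "params": {"name": "get_weather", "arguments": {"location": "Paris"}}}
print(json.dumps(handle_tools_call(req)))
```

Note that both success and failure travel back in the `result` object rather than as JSON-RPC protocol errors, which keeps failures visible to the model.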

Tool calls can be parallelized (Claude Opus 4 can emit multiple tool calls in one turn) and chained (the result of one tool informs the next). They are the atomic unit of agentic behavior.
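Concurrent execution of several tool calls from one turn can be sketched with `asyncio.gather`; `call_tool` here is a hypothetical stand-in for a real MCP client request, not an SDK function:

```python
import asyncio

async def call_tool(name: str, arguments: dict) -> str:
    """Stand-in for an MCP client sending tools/call to a server."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{name} -> {arguments}"

async def run_parallel(calls):
    # Execute every tool call emitted in the turn concurrently,
    # preserving the order of results.
    return await asyncio.gather(*(call_tool(n, a) for n, a in calls))

results = asyncio.run(run_parallel([
    ("get_weather", {"location": "Paris"}),
    ("get_weather", {"location": "Tokyo"}),
]))
print(results)
```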

In MCP, tool call semantics are identical regardless of server — a GitHub tool call looks the same as a Stripe tool call from the protocol's perspective.

Code example

// JSON-RPC request from client to server
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "Paris" }
  }
}

// Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "18°C, partly cloudy" }],
    "isError": false
  }
}

Examples

  1. LLM calls `read_file(path='/src/app.ts')` to read your code
  2. LLM calls `create_pull_request(title, body, base, head)` on GitHub
  3. LLM calls `query(sql='SELECT COUNT(*) FROM users')` on Supabase
  4. LLM calls `send_message(channel, text)` on Slack after research
  5. LLM calls `search_web(query='latest MCP news')` on Tavily

What it's NOT

  • ✗ A tool call is NOT a remote procedure call to the LLM — it's the LLM's output, which the host executes.
  • ✗ A tool call is NOT always successful — servers can return `isError: true` and the LLM should handle failure.
  • ✗ A tool call is NOT limited to read operations — tools can mutate state (send email, create issue, etc.).
  • ✗ A tool call does NOT bypass user approval — hosts typically show tool calls before executing risky ones.

Related terms

  • MCP Tool
  • Function Calling
  • Tool Use
  • JSON-RPC 2.0
  • Model Context Protocol (MCP)

See also

  • MCP Tools Concept
  • Anthropic Tool Use

Frequently asked questions

Can an LLM make multiple tool calls at once?

Yes — modern Claude and GPT-4 models can emit parallel tool calls in a single turn, which the host executes concurrently.

What if a tool call fails?

The server returns `isError: true` with an error message. The LLM sees this and typically retries, asks for clarification, or adapts its plan.
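One common client-side pattern is a bounded retry before surfacing the failure to the model. A sketch, where `call_fn` is a hypothetical stand-in for the real MCP client request:

```python
def call_with_retry(call_fn, max_attempts=3):
    """Retry a tool call until it succeeds or attempts run out."""
    result = None
    for _ in range(max_attempts):
        result = call_fn()
        if not result.get("isError"):
            return result
    return result  # still an error after max_attempts; let the LLM adapt

# Demo: a flaky tool that fails once, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 2:
        return {"isError": True,
                "content": [{"type": "text", "text": "timeout"}]}
    return {"isError": False,
            "content": [{"type": "text", "text": "18°C, partly cloudy"}]}

result = call_with_retry(flaky_call)
```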

Are tool calls logged?

Yes, most hosts log every tool call (name, arguments, result) for debugging and audit. Production agents should log with correlation IDs.
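A correlation-ID log wrapper can be sketched as follows; the helper names are illustrative, not part of any MCP SDK:

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tool_calls")

def logged_call(name, arguments, execute):
    """Log a tool call and its result under one correlation ID."""
    corr_id = uuid.uuid4().hex[:8]
    logger.info("call %s name=%s args=%s", corr_id, name, arguments)
    result = execute(name, arguments)
    # The shared corr_id lets the request and result lines be
    # matched up later in audit logs.
    logger.info("result %s isError=%s", corr_id, result.get("isError"))
    return result

result = logged_call("get_weather", {"location": "Paris"},
                     lambda name, args: {"isError": False, "content": []})
```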
