MCP Glossary

Tool Use

TL;DR

Tool use is Anthropic's term for what OpenAI calls 'function calling': an LLM's ability to invoke pre-defined tools with structured arguments. In Claude's Messages API, tool use appears as `tool_use` content blocks, which the host executes and answers with `tool_result` blocks.

In depth

Tool use is how Claude reaches outside the conversation. In the Anthropic Messages API, you declare a `tools` array with each tool's name, description, and input schema. During inference, Claude can emit a `tool_use` content block, in effect saying: 'call `get_weather` with `{ "location": "Paris" }`'. The host executes that call and sends a follow-up user message containing a `tool_result` block with the output.
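
A minimal sketch of the host's side of that round trip, with the model call itself omitted: dispatch a `tool_use` block to a local function and wrap its output as a `tool_result` block. The handler and its output here are illustrative stand-ins, not part of any real API.

```python
import json

# Hypothetical tool implementations the host makes available. The weather
# data returned here is a stub for illustration only.
HANDLERS = {
    "get_weather": lambda location: json.dumps({"location": location, "temp_c": 18}),
}

def execute_tool_use(block: dict) -> dict:
    """Run the function named in a tool_use block and wrap the output."""
    output = HANDLERS[block["name"]](**block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": block["id"],  # must echo the tool_use block's id
        "content": output,
    }

# A tool_use block as Claude would emit it:
tool_use = {"type": "tool_use", "id": "toolu_01abc",
            "name": "get_weather", "input": {"location": "Paris"}}

# The follow-up user message the host sends back:
follow_up = {"role": "user", "content": [execute_tool_use(tool_use)]}
```

The only bookkeeping the host must get right is echoing the `id` as `tool_use_id`, so Claude can match each result to the call that produced it.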

Tool use is foundational to agentic behavior. Without it, Claude can only talk; with it, Claude can read files, query APIs, run code, and interact with the world. The MCP ecosystem is essentially a way to distribute ready-made tools that Claude (and other LLMs) can use.

Claude's tool use has a few powerful extensions: **parallel tool calls** (Claude 4+ can emit multiple `tool_use` blocks per turn), **tool choice control** (force a specific tool or let Claude choose), and **streaming tool use** (the `tool_use` block streams as it's generated).
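
Tool choice control is expressed through the `tool_choice` request parameter. A sketch of its shapes, per the Anthropic Messages API (the tool declaration here is a trimmed placeholder):

```python
# tool_choice values in the Messages API:
auto_choice   = {"type": "auto"}                         # default: Claude decides
any_choice    = {"type": "any"}                          # must call some tool
forced_choice = {"type": "tool", "name": "get_weather"}  # must call this tool

request = {
    "model": "claude-opus-4",
    "max_tokens": 1024,
    "tools": [{"name": "get_weather",
               "description": "Get current weather for a location",
               "input_schema": {"type": "object", "properties": {}}}],
    "tool_choice": forced_choice,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
```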

In MCP deployments, tool use connects directly to MCP tools. The host passes MCP tool definitions into Claude's `tools` parameter; when Claude emits a `tool_use` block, the host translates it into an MCP `tools/call` request.
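
That translation is mechanical: MCP is JSON-RPC 2.0, so the block's `name` maps to `params.name` and its `input` to `params.arguments`. A sketch (the request-id counter is a local convenience, not part of the spec):

```python
import itertools
import json

_ids = itertools.count(1)

def tool_use_to_mcp_call(block: dict) -> str:
    """Translate a Messages API tool_use block into an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),              # JSON-RPC request id
        "method": "tools/call",
        "params": {
            "name": block["name"],      # MCP tool name
            "arguments": block["input"],  # structured arguments
        },
    })

req = tool_use_to_mcp_call({
    "type": "tool_use", "id": "toolu_01abc",
    "name": "get_weather", "input": {"location": "Paris"},
})
```

The MCP server's response then flows back in the opposite direction, wrapped as the `tool_result` content the host returns to Claude.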

Code example

// Anthropic Messages API — declare tools
{
  "model": "claude-opus-4",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "input_schema": {
        "type": "object",
        "properties": { "location": { "type": "string" } },
        "required": ["location"]
      }
    }
  ],
  "messages": [{ "role": "user", "content": "What's the weather in Paris?" }]
}

// Claude's response — tool_use content block
{
  "role": "assistant",
  "stop_reason": "tool_use",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_01abc",
      "name": "get_weather",
      "input": { "location": "Paris" }
    }
  ]
}
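
To complete the exchange, the host sends a follow-up user message containing a `tool_result` block whose `tool_use_id` echoes the `id` above; the weather string here is illustrative.

// Host's follow-up — tool_result block
{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "toolu_01abc",
      "content": "18°C, partly cloudy"
    }
  ]
}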

Examples

  1. Claude emitting `tool_use` to call a GitHub MCP server's `create_issue`
  2. Claude choosing between `read_file` and `search_code` based on the task
  3. Parallel tool use: Claude emits three tool calls in one turn
  4. Forced tool use: `tool_choice: { type: 'tool', name: 'search' }`
  5. Chained tool use: call A → read result → call B → synthesize answer
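
The chained pattern is just a host loop: keep calling the model while it stops for tool use, feeding each batch of results back in. A sketch with the model stubbed by scripted responses (a real host would call the Messages API instead):

```python
# Scripted stand-in for two model turns: one tool request, then a final answer.
SCRIPTED = [
    {"stop_reason": "tool_use",
     "content": [{"type": "tool_use", "id": "toolu_1",
                  "name": "search", "input": {"q": "weather Paris"}}]},
    {"stop_reason": "end_turn",
     "content": [{"type": "text", "text": "It's mild in Paris today."}]},
]

def call_model(messages):
    # Stub: pick the scripted turn matching how many assistant turns exist.
    return SCRIPTED[sum(m["role"] == "assistant" for m in messages)]

def run_tool(block):
    # Stub tool execution; a real host would dispatch on block["name"].
    return {"type": "tool_result", "tool_use_id": block["id"],
            "content": "stub result"}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
while True:
    resp = call_model(messages)
    messages.append({"role": "assistant", "content": resp["content"]})
    if resp["stop_reason"] != "tool_use":
        break  # model answered directly; no more tools to run
    results = [run_tool(b) for b in resp["content"] if b["type"] == "tool_use"]
    messages.append({"role": "user", "content": results})

final = messages[-1]["content"][0]["text"]
```

Checking `stop_reason == "tool_use"` is what distinguishes 'Claude wants a tool' from 'Claude is done'.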

What it's NOT

  • ✗ Tool use is NOT unique to Anthropic — it's the general pattern behind function calling.
  • ✗ Tool use does NOT require MCP — you can wire any function directly without MCP.
  • ✗ Tool use is NOT a guaranteed call — the model might skip tools if confident in direct answers.
  • ✗ Tool use responses are NOT automatic — the host must send back a `tool_result` block.

Related terms

Function Calling · Tool Call · MCP Tool · AI Agent · Large Language Model (LLM)

See also

  • Anthropic Tool Use

Frequently asked questions

How is tool use different from function calling?

They're the same concept, named differently. Anthropic = 'tool use'; OpenAI = 'function calling'; Gemini = 'function declarations'.

Can Claude use tools without MCP?

Yes — pass tools directly in the Messages API. MCP just standardizes how to distribute those tools across apps.

What's parallel tool use?

Claude 4+ can emit multiple tool_use blocks in one assistant message. The host executes them concurrently for speed.
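
Since each `tool_use` block carries its own `tool_use_id`, the host is free to fan the calls out and return the results in any order. A sketch using a thread pool, with stubbed tool functions:

```python
from concurrent.futures import ThreadPoolExecutor

def run_tool(block: dict) -> dict:
    # Stub: a real host would dispatch on block["name"] and pass block["input"].
    return {"type": "tool_result", "tool_use_id": block["id"],
            "content": f"result for {block['name']}"}

# Three tool_use blocks from a single assistant turn (inputs trimmed):
blocks = [
    {"type": "tool_use", "id": "toolu_1", "name": "read_file", "input": {}},
    {"type": "tool_use", "id": "toolu_2", "name": "search_code", "input": {}},
    {"type": "tool_use", "id": "toolu_3", "name": "get_weather", "input": {}},
]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_tool, blocks))

# All results go back in ONE user message, matched by tool_use_id.
follow_up = {"role": "user", "content": results}
```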
