MCP Glossary

Function Calling

TL;DR

Function calling is an LLM capability to emit structured requests that invoke pre-defined functions with JSON arguments, rather than returning free-form text. Popularized by OpenAI in 2023, it is the foundation of tool use and the precursor to MCP.

In depth

Function calling lets an LLM transform 'I need to get the weather' into a structured call: `{ name: 'get_weather', args: { location: 'Paris' } }`. The model is trained to choose between free-text output and structured function invocation based on context. When it picks a function, the host executes it and feeds the result back.
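The loop described above can be sketched in a few lines. This is a minimal illustration with a stubbed model decision and a hypothetical `get_weather` implementation, not any provider's actual SDK:

```typescript
// Minimal sketch of the function-calling loop. The model's output is
// stubbed; real SDKs return this structure from an API response.
type ToolCall = { name: string; args: Record<string, unknown> };

// Registry of host-side implementations the model may invoke
// (get_weather here is a hypothetical stub).
const toolImpls: Record<string, (args: any) => string> = {
  get_weather: ({ location }) => `Sunny in ${location}`,
};

// Pretend the model chose a structured call instead of free text.
const call: ToolCall = { name: "get_weather", args: { location: "Paris" } };

// The host executes the named function; this string is what gets
// fed back to the model as the tool result.
const result = toolImpls[call.name](call.args);
console.log(result);
```

In a real client, `result` would be appended to the conversation and the model called again, so it can incorporate the tool output into its final answer.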

OpenAI introduced function calling in June 2023 as a way to bridge chat models and real-world actions. Anthropic shipped 'tool use' with similar semantics shortly after. Gemini, Mistral, and nearly every modern frontier model now support it. The API shapes differ slightly, but the concept is universal.

Function calling operates at the model-API level — the LLM emits tool calls and the client executes them. MCP sits one layer up: it standardizes WHERE those tools come from. An MCP tool IS a function the LLM can call; the MCP server provides the implementation.

Think of function calling as the 'language' and MCP as the 'library'. They work together: Claude emits function calls in Anthropic format, the host translates these into MCP `tools/call` requests to the right server.
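That translation step can be sketched concretely. The `tools/call` request shape below follows the MCP specification's JSON-RPC format; the input object is an illustrative Anthropic-style `tool_use` block:

```typescript
// Sketch: translating a provider-format tool call into an MCP
// `tools/call` JSON-RPC request. The tool_use block is illustrative.
const toolUse = {
  type: "tool_use",
  id: "toolu_01",
  name: "get_weather",
  input: { location: "Paris" },
};

const mcpRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: toolUse.name,       // MCP tool name matches the function name
    arguments: toolUse.input, // JSON arguments pass through unchanged
  },
};

console.log(JSON.stringify(mcpRequest, null, 2));
```

The host sends this request to whichever MCP server advertised the tool, then wraps the server's result back into the provider's tool-result format.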

Examples

  1. OpenAI's `tools` parameter listing callable functions in chat API
  2. Anthropic's `tool_use` content block in Claude Messages API
  3. Gemini's `functionDeclarations` in generateContent requests
  4. A tool call: `{ name: 'search_web', args: { query: 'MCP news' } }`
  5. A tool result fed back as the next user message turn
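For concreteness, here is what a tool definition looks like. This sketch follows the OpenAI `tools` parameter shape; other providers express the same JSON Schema idea with slightly different field names:

```typescript
// Sketch of a function definition as passed to a chat API
// (OpenAI-style `tools` parameter shape).
const tools = [
  {
    type: "function",
    function: {
      name: "search_web",
      description: "Search the web and return top results.",
      parameters: {
        // Standard JSON Schema describing the arguments.
        type: "object",
        properties: {
          query: { type: "string", description: "Search query text" },
        },
        required: ["query"],
      },
    },
  },
];

console.log(tools[0].function.name);
```

The `description` fields matter: models rely on them to decide when a tool applies and how to fill its arguments.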

What it's NOT

  • ✗ Function calling is NOT new in 2024 — OpenAI shipped it in June 2023.
  • ✗ Function calling is NOT the same as MCP — MCP defines the source of tools; function calling invokes them.
  • ✗ Function calling is NOT always reliable — models sometimes hallucinate tool names or args.
  • ✗ Function calling is NOT limited to one call per turn — modern models emit parallel calls.
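The last point — parallel calls — means a single model turn can contain several tool calls, which the host is free to execute concurrently. A small sketch, with illustrative shapes and a stubbed async implementation:

```typescript
// Sketch: one model turn emitting two tool calls; the host runs
// them concurrently with Promise.all. Shapes are illustrative.
const calls = [
  { name: "get_weather", args: { location: "Paris" } },
  { name: "get_weather", args: { location: "Tokyo" } },
];

// Hypothetical async implementation (e.g. an HTTP fetch in practice).
const impls: Record<string, (args: { location: string }) => Promise<string>> = {
  get_weather: async ({ location }) => `Forecast for ${location}`,
};

// Execute all calls concurrently; results keep the original order.
Promise.all(calls.map((c) => impls[c.name](c.args))).then((results) => {
  console.log(results);
});
```

Each result is then matched back to its call (providers assign call IDs for this) before being returned to the model.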

Related terms

  • Tool Use
  • Tool Call
  • MCP Tool
  • AI Agent
  • Large Language Model (LLM)

See also

  • OpenAI Function Calling
  • Anthropic Tool Use

Frequently asked questions

Is function calling the same as tool use?

Functionally yes — OpenAI calls it 'function calling', Anthropic calls it 'tool use', Gemini calls it 'function declarations'. Same concept.

Do I need MCP to do function calling?

No — function calling works without MCP. MCP is a standardized way to distribute tools across apps; function calling is the underlying model capability.

How reliable is function calling?

Modern models (Claude 4, GPT-4, Gemini 2.5) are very reliable — 95%+ tool-call correctness on well-specified schemas.

Build with MCP

Browse 300+ MCP servers, explore recipes, or continue learning the MCP vocabulary.
