
Technique

Tool use / function calling

An LLM capability where the model decides to call external functions (search, code, APIs) and uses the results to produce its final answer.

Tool use — also called function calling — is the ability of an LLM to decide that it needs an external tool, emit a structured request to call that tool, receive the tool's output, and incorporate it into its final answer. You declare the available tools (each with a name, description, and JSON schema for its parameters) up front; the model picks which to call and with what arguments.

It matters because a plain LLM can't access fresh data, run code, or interact with external systems. Tool use turns a chat model into something that can search the web, query your database, run Python, send emails, or trigger any API endpoint. It's the building block of every useful agent.

A concrete example: ask Claude "what's the weather in Taipei right now?". With tool use enabled and a `get_weather(city)` tool registered, Claude responds with a tool call: `{"name": "get_weather", "input": {"city": "Taipei"}}`. Your code runs the function, sends the result back, and Claude writes the natural-language answer.

OpenAI, Anthropic, Gemini, and most open-source models now expose this as a first-class API feature. Tool use is what makes Cursor edit your code, Perplexity cite sources, and ChatGPT call DALL-E.

Related: ReAct, function calling, agent, MCP (Model Context Protocol).
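The client-side half of that loop can be sketched in a few lines of Python. This is a simulation, not a real provider SDK: the tool schema follows the common JSON-schema convention, the `get_weather` handler is a stand-in that returns a canned string, and the model's tool-call turn is hard-coded where a real API response would arrive.

```python
import json

# Tool declaration sent to the model up front: name, description,
# and a JSON schema for the parameters (convention shared by major APIs).
TOOLS = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"22°C and partly cloudy in {city}"

def handle_tool_call(call: dict) -> str:
    """Dispatch a model-emitted tool call to the matching local function."""
    handlers = {"get_weather": lambda inp: get_weather(inp["city"])}
    return handlers[call["name"]](call["input"])

# Simulated model turn for "what's the weather in Taipei right now?";
# in practice this JSON arrives in the API response.
model_turn = json.loads('{"name": "get_weather", "input": {"city": "Taipei"}}')
result = handle_tool_call(model_turn)
print(result)  # this tool result goes back to the model for the final answer
```

The key design point is the round trip: the model never executes anything itself. It emits structured JSON, your code dispatches it to a trusted function, and only the function's string result is fed back for the model to phrase the answer.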

Last updated: 2026-04-29


Tool use / Function calling · BuilderWorld