Technique

ReAct (Reason + Act)

An agent loop where the model alternates between writing a reasoning step ("Thought") and choosing a tool to call ("Action"), using the result before reasoning again.

ReAct is a prompting pattern from a 2022 Google paper that combines reasoning (chain-of-thought) with tool use. The model writes a Thought, picks an Action (a tool call with arguments), receives an Observation (the tool's result), and loops — Thought, Action, Observation — until it has the answer.

It matters because pure chain-of-thought reasoning has no way to look things up. ReAct lets the model query Wikipedia, search the web, run code, hit a database, or call any other tool mid-reasoning. That's the foundation for nearly every modern LLM agent: the assistant works out what it needs, fetches it, then continues thinking.

A simple example: "What's the population of the country whose capital is Vientiane?" The model thinks "I need to know what country's capital is Vientiane", calls a search tool, gets back "Laos", thinks again "Now I need Laos's population", searches again, gets "7.5 million", and answers.

Most agent frameworks (LangChain, LlamaIndex, AutoGPT, Anthropic's tool use) implement some flavor of ReAct under the hood. Modern function-calling APIs are essentially ReAct made native, with the Thought/Action format replaced by structured tool calls.

Related: tool use, function calling, agent, chain-of-thought.
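The Vientiane example above can be sketched as a minimal ReAct loop. This is a toy illustration, not any framework's API: `fake_llm` is a scripted stand-in for a real model and `search` returns canned facts, so only the loop structure (generate step, parse Action, run tool, append Observation) reflects the actual technique.

```python
def search(query: str) -> str:
    """Toy search tool with canned results (stand-in for a real search API)."""
    facts = {
        "capital Vientiane": "Vientiane is the capital of Laos.",
        "population Laos": "Laos has a population of about 7.5 million.",
    }
    for key, fact in facts.items():
        if all(word.lower() in query.lower() for word in key.split()):
            return fact
    return "No results."

def fake_llm(transcript: str) -> str:
    """Scripted stand-in for a model: emits a Thought/Action pair or a Final Answer."""
    if "Laos" not in transcript:
        return ("Thought: I need the country whose capital is Vientiane.\n"
                "Action: search[capital of Vientiane]")
    if "7.5 million" not in transcript:
        return ("Thought: Now I need the population of Laos.\n"
                "Action: search[population of Laos]")
    return "Final Answer: about 7.5 million"

def react(question: str, max_steps: int = 5) -> str:
    """Run the Thought -> Action -> Observation loop until a Final Answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse the Action line, e.g. "Action: search[capital of Vientiane]"
        action_line = step.splitlines()[-1]
        query = action_line.split("search[", 1)[1].rstrip("]")
        # Run the tool and feed its result back as an Observation
        transcript += f"Observation: {search(query)}\n"
    return "No answer."

print(react("What's the population of the country whose capital is Vientiane?"))
```

In a real agent, `fake_llm` is an LLM call that sees the full transcript each turn, and the Observation line is how the tool result gets back into the model's context; function-calling APIs replace the `Action: search[...]` text parsing with structured tool-call objects.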

Last updated: 2026-04-29
