Chain-of-thought (CoT)

A prompting technique that gets the model to write out its reasoning step by step before giving the final answer, dramatically improving performance on math and logic tasks.

Chain-of-thought prompting is the trick of asking the model to "think step by step": to write out intermediate reasoning before producing the final answer. Adding that single phrase to a prompt can take an LLM from getting multi-step math problems wrong much of the time to getting them right.

It matters because LLMs generate text token by token. If the answer requires multi-step reasoning and you ask only for the final number, the model has to compress the whole chain into a single forward pass and often skips a step. By writing the steps out, the model uses its own intermediate text as a scratchpad: each step conditions on the last.

The canonical example is a word problem: "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many does he have now?" A zero-shot prompt often blurts out a wrong number. A chain-of-thought prompt ("Let's think step by step: Roger starts with 5 balls. 2 cans × 3 balls = 6 new balls. 5 + 6 = 11") reaches the correct answer far more reliably, especially on harder problems.

Most reasoning-focused models now perform CoT internally (OpenAI o1, DeepSeek R1, Claude with extended thinking) without being asked. Related: zero-shot CoT, self-consistency, ReAct (CoT plus tool use), and tree-of-thoughts.
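The pattern can be sketched in a few lines of Python. The template strings, `build_prompt`, and `extract_final_answer` are illustrative names, not from any particular library; you would send the built prompt to whatever completion API you use, then parse the last number out of the step-by-step text.

```python
import re

# Hypothetical prompt templates: the only difference is the trigger phrase.
ZERO_SHOT = "Q: {question}\nA:"
COT = "Q: {question}\nA: Let's think step by step."

def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Wrap a question in a zero-shot or chain-of-thought template."""
    template = COT if chain_of_thought else ZERO_SHOT
    return template.format(question=question)

def extract_final_answer(completion: str):
    """A CoT completion ends with the result, so take the last number."""
    numbers = re.findall(r"-?\d+", completion)
    return int(numbers[-1]) if numbers else None

question = ("Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
            "How many does he have now?")
prompt = build_prompt(question, chain_of_thought=True)

# A step-by-step completion (as in the example above); the final
# number in the text is the answer we extract:
completion = ("Roger starts with 5 balls. 2 cans x 3 balls = 6 new balls. "
              "5 + 6 = 11.")
answer = extract_final_answer(completion)  # -> 11
```

The design point is that the parser relies on the CoT convention of finishing the chain with the result, which is why techniques like self-consistency can sample several chains and majority-vote over the extracted answers.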

Last updated: 2026-04-29
