
Technique

Few-shot prompting

A prompting technique where you give the model a few worked examples in the prompt before asking it to do the same task on a new input.

Few-shot prompting means showing the model 2-5 input-output examples inside the prompt itself before giving it the real task. The model learns the pattern from the examples and applies it to your new input: no fine-tuning, no extra training, just the prompt.

It matters because LLMs are surprisingly good at picking up patterns from a handful of demonstrations. If you want consistent output formatting, a specific tone, or a niche classification scheme, listing examples is often more reliable than describing the task in words. It's also the cheapest form of customization, since you only pay for the extra tokens.

A simple example: extracting structured data. Instead of writing "extract the company name and the funding amount from this news headline", you show three sample headlines with their extracted JSON, then paste the fourth headline. The model copies the format. The same trick works for translation style, code conventions, sentiment labels with custom categories, and almost any narrow task.

Related ideas worth knowing: zero-shot prompting (no examples), one-shot prompting (just one example), in-context learning (the underlying capability that makes few-shot work), and chain-of-thought (a way to embed reasoning steps into the examples).
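
Here is a minimal sketch of the extraction example above. The headlines, company names, and funding amounts are hypothetical sample data, and the prompt layout ("Headline:" / "JSON:") is just one reasonable convention; swap in your own examples and send the resulting string to whatever LLM client you use.

```python
import json

# Worked examples: each pairs an input headline with the exact JSON we want back.
# These are made-up headlines for illustration only.
EXAMPLES = [
    ("Acme Robotics raises $40M Series B to scale warehouse automation",
     {"company": "Acme Robotics", "amount": "$40M"}),
    ("Fintech startup Lendly closes $12M seed round",
     {"company": "Lendly", "amount": "$12M"}),
    ("Nimbus AI secures $150M to build climate forecasting models",
     {"company": "Nimbus AI", "amount": "$150M"}),
]

def build_few_shot_prompt(new_headline: str) -> str:
    """Assemble the prompt: instruction, worked examples, then the new input."""
    parts = ["Extract the company name and funding amount from each headline as JSON.\n"]
    for headline, extracted in EXAMPLES:
        parts.append(f"Headline: {headline}")
        parts.append(f"JSON: {json.dumps(extracted)}\n")
    # The real task comes last; the model continues the pattern after "JSON:".
    parts.append(f"Headline: {new_headline}")
    parts.append("JSON:")
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_few_shot_prompt("Voltway lands $25M to expand its EV charging network")
    print(prompt)  # Send this string to your model; expect a single JSON object back.
```

Keeping the examples in the same order and format as the expected output is the whole trick: the model completes the pattern rather than interpreting an abstract instruction.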

Last updated: 2026-04-29

