
Technique

Zero-shot prompting

Asking an LLM to do a task purely from a description, with no examples — relying only on what the model learned during pre-training.

Zero-shot prompting is the most basic form of prompting: you describe the task in natural language and the model does it, without any worked examples. "Translate this into Japanese" or "summarize this article in three sentences" are zero-shot prompts. It matters because modern LLMs were trained on so much text that they already know how to perform many tasks just from being told. Zero-shot is the default starting point: try it first, and only add examples or fine-tuning if the quality isn't there. A shorter prompt also means lower cost and faster responses.

A concrete example: classifying customer reviews as positive, negative, or neutral. A zero-shot prompt is literally just "Classify this review as positive, negative, or neutral: {text}". A few years ago this would have required a labeled dataset and a trained classifier; today GPT-4 or Claude handles it in one line.

When zero-shot fails, the typical fixes are: a clearer task description, switching to few-shot prompting with examples, breaking the task into chain-of-thought steps, or fine-tuning. Zero-shot performance is also a common benchmark for new models: how well can a model do tasks it was never explicitly trained on?

Related: few-shot prompting, in-context learning, instruction tuning.
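The review-classification example above can be sketched in a few lines. This is a minimal illustration, not tied to any particular provider: `build_zero_shot_prompt` and `parse_label` are hypothetical helper names, and the LLM call itself is left out since it depends on your API client.

```python
LABELS = ("positive", "negative", "neutral")

def build_zero_shot_prompt(review: str) -> str:
    # Zero-shot: only a task description and the input -- no worked examples.
    return (
        "Classify this review as positive, negative, or neutral. "
        "Reply with a single word.\n\n"
        f"Review: {review}"
    )

def parse_label(reply: str) -> str:
    # Normalize the model's free-text reply to one of the expected labels.
    word = reply.strip().lower().rstrip(".")
    return word if word in LABELS else "neutral"

prompt = build_zero_shot_prompt("The battery died after two days.")
# Send `prompt` to your LLM of choice, then pass the reply to parse_label().
```

The `parse_label` step matters in practice: even with "reply with a single word" in the prompt, models sometimes add punctuation or capitalization, so normalizing the output keeps downstream code simple.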

Last updated: 2026-04-29
