
Technique

In-context learning (ICL)

The ability of an LLM to learn a new task from examples shown inside the prompt at inference time, without any weight updates.

In-context learning is the surprising ability of large language models to pick up new tasks just from examples in the prompt, without any training. Show GPT-4 three examples of a made-up classification scheme and it'll classify the fourth correctly — no fine-tuning, no gradient updates, just inference. The "learning" happens entirely in the model's forward pass.

It matters because it's the foundation of few-shot prompting and a big reason LLMs are so flexible. With ICL, one model can do thousands of different tasks just by seeing different examples — translation, sentiment analysis, code style, made-up game rules. You don't need a separate model per task. It also turns prompt design into a kind of programming.

A concrete example: invent a fake language where "glorp" means add and "snurf" means double. Show the model "glorp 3 4 = 7. snurf 5 = 10. glorp 2 6 = ?" and it'll answer 8, with no training on your fake operators. The model has effectively learned your task from two examples in the prompt.

Why it works is still partially a research question — papers describe it as the model implicitly running a meta-learning algorithm, or as gradient descent in activation space. Practically, it's why few-shot prompting works at all.

Related: few-shot prompting, emergent abilities, scaling laws, meta-learning.
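The fake-operator example can be sketched as a small prompt-construction helper. This is a minimal illustration, not a real API: `build_icl_prompt` is a hypothetical name, and the actual model call (any chat/completions API would do) is deliberately left out.

```python
def build_icl_prompt(examples, query):
    """Format (input, output) demonstrations plus a query into one few-shot prompt."""
    lines = [f"{x} = {y}" for x, y in examples]
    lines.append(f"{query} = ?")
    return "\n".join(lines)

# The made-up operators from the text: "glorp" adds, "snurf" doubles.
examples = [("glorp 3 4", 7), ("snurf 5", 10)]
prompt = build_icl_prompt(examples, "glorp 2 6")
print(prompt)
# glorp 3 4 = 7
# snurf 5 = 10
# glorp 2 6 = ?
```

Sending this prompt to a capable model yields 8: the rule for "glorp" is inferred entirely from the demonstrations, with no weight updates.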

Last updated: 2026-04-29
