
Technique

Prompt engineering

The craft of writing prompts that consistently get useful, accurate output from an LLM — covering structure, examples, role framing, and constraints.

Prompt engineering is the practice of designing the text you feed an LLM to maximize the chance it does what you want. It covers where to put instructions, how many examples to include, what role to give the model, which constraints to enforce, and how to format the output you want back.

It matters because the same task can produce wildly different results depending on how you phrase the prompt. "Summarize this" gives generic output; "Summarize this in three bullets aimed at a busy executive, focused on financial impact" gives something usable. Models are extremely sensitive to phrasing, ordering, and examples, and small changes can flip success rates from 60% to 95%.

A simple example: extracting structured data. A bad prompt: "give me the data". A better one: "Extract company name, funding amount, and round (Series A/B/C/etc.) from the text below. Return JSON with keys company, amount_usd, and round. If a field is missing, use null." The second specifies an output schema, handles missing fields, and disambiguates the currency.

Proven techniques include few-shot examples, chain-of-thought, role/persona framing, explicit output formats, negative constraints, and prefilling the assistant's response. Anthropic, OpenAI, and Google all publish official prompt engineering guides for their models; start there, not with random Twitter threads.

Related: few-shot, chain-of-thought, system prompt, prompt injection.
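The extraction prompt above can be sketched as code. This is a minimal illustration, not a definitive implementation: the template wording follows the article's example, while the function names and the simulated reply are hypothetical, and no actual API call is made.

```python
import json

# Prompt template from the article's extraction example: it names the
# fields, pins the JSON schema, and tells the model how to handle
# missing values.
EXTRACTION_PROMPT = """\
Extract company name, funding amount, and round (Series A/B/C/etc.)
from the text below. Return JSON with keys company, amount_usd, and
round. If a field is missing, use null.

Text:
{text}"""


def build_prompt(text: str) -> str:
    """Fill the source text into the prompt template."""
    return EXTRACTION_PROMPT.format(text=text)


def parse_reply(reply: str) -> dict:
    """Parse the model's JSON reply; json.loads maps null to None."""
    data = json.loads(reply)
    # Guarantee every expected key exists even if the model omitted one.
    return {k: data.get(k) for k in ("company", "amount_usd", "round")}


# Simulated model reply, standing in for a real API response.
reply = '{"company": "Acme", "amount_usd": 12000000, "round": null}'
record = parse_reply(reply)
```

Because the prompt fixes the schema up front, the parsing side can stay a few lines long: every reply is either valid JSON with known keys or an error you can catch.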
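Of the techniques listed, few-shot prompting is the easiest to sketch: show the model a handful of worked input/output pairs before the real input so it infers the task and format. The sentiment task and labels below are illustrative assumptions, not from the article.

```python
# Hypothetical labeled examples (text, label) for a sentiment task.
EXAMPLES = [
    ("The product arrived broken and support ignored me.", "negative"),
    ("Setup took five minutes and it works perfectly.", "positive"),
    ("It does what the box says.", "neutral"),
]


def few_shot_prompt(new_input: str) -> str:
    """Prepend worked examples, then leave the last label blank
    so the model completes it in the same format."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in EXAMPLES
    )
    return f"{shots}\n\nReview: {new_input}\nSentiment:"


prompt = few_shot_prompt("Decent, but the battery dies fast.")
```

Ending the prompt mid-pattern ("Sentiment:") is the point: the cheapest completion for the model is to continue the pattern with a single label.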
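Prefilling the assistant's response can also be shown in a few lines. In chat-style APIs that accept a trailing partial assistant turn (Anthropic's Messages API does), starting that turn with "{" nudges the model to continue with JSON and skip any preamble. The message structure below is a generic sketch, not a specific SDK's types.

```python
def prefilled_messages(user_prompt: str) -> list[dict]:
    """Build a message list whose final turn is a partial assistant
    reply; the model continues from that prefix."""
    return [
        {"role": "user", "content": user_prompt},
        # Prefill: the model's output is appended after this "{".
        {"role": "assistant", "content": "{"},
    ]


msgs = prefilled_messages("Extract the fields as JSON.")
```

Note that with a prefill you must prepend the prefix ("{") back onto the model's output before parsing, since the reply continues from it rather than repeating it.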

Last updated: 2026-04-29
