A prompt is the text you type into ChatGPT, Claude, or any other LLM. That's the whole definition. The reason it has its own buzzword ("prompt engineering") is that small wording changes produce wildly different answers — and most people leave a lot of quality on the table because they don't know what to vary.
## What's actually in a prompt
When you type into a chat box, the system bundles three things and sends them to the model:
- The system prompt (set by the product, invisible to you) — "You are Claude, a helpful AI assistant made by Anthropic. Be concise, careful…" This shapes the model's tone and constraints.
- The conversation history — every message from this conversation, including the model's earlier replies.
- Your latest message — what you just typed.
In an API call you control all three. In a chat product you usually only see the third, but the first two still affect the answer.
This matters because if your conversation has drifted (you started discussing a recipe and now you're asking about Python), earlier context still influences the answer. If you want a clean response, start a new chat.
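In API terms, that three-part bundle is just a list of role-tagged messages. Here's a minimal sketch using the OpenAI-style chat format (field names vary by provider; the content strings are placeholders):

```python
# A sketch of what a chat product assembles on each turn, using the
# OpenAI-style "messages" format. Nothing here calls a real API.

def build_request(system_prompt, history, latest_message):
    """Bundle the three parts into the single list the model actually sees."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # alternating user/assistant turns so far
    messages.append({"role": "user", "content": latest_message})
    return messages

request = build_request(
    system_prompt="You are a helpful assistant. Be concise.",
    history=[
        {"role": "user", "content": "Give me a pasta recipe."},
        {"role": "assistant", "content": "Cacio e pepe: ..."},
    ],
    latest_message="Now explain Python decorators.",
)
# The recipe exchange is still in the request -- which is why a drifted
# conversation colors the answer, and a fresh chat gives a cleaner one.
```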
## What makes a prompt good
There's no formula, but four levers consistently move quality.
Specificity. "Write me a marketing email" gets you a generic marketing email. "Write a 120-word marketing email to existing customers of a SaaS analytics tool, announcing that v2.0 ships next Tuesday with one new feature: scheduled reports. Tone: friendly but professional, no exclamation points" gets you something usable.
Context. Tell the model what role it's playing, who the audience is, what came before. "You are a senior copywriter reviewing my draft. Be blunt about what's weak." Then paste the draft.
Format. State the output format. "Give me a markdown table with columns: Pro, Con, When to use" beats hoping the model picks the right shape.
Examples. This is the most underused trick. Show the model one or two examples of the input/output you want. Models are great at pattern-matching from a few examples — far better than at following abstract instructions. Two well-chosen examples often beat 200 words of rules.
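If you build prompts in code, the few-shot pattern is simple to assemble: instruction first, then the example pairs, then the new input with the output left open for the model to complete. A sketch (the changelog task and examples are placeholders):

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a prompt that shows input/output pairs before the real task."""
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    # End with the real input and a dangling "Output:" for the model to fill.
    parts.append(f"Input: {new_input}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    instruction="Rewrite each product note as a one-line changelog entry.",
    examples=[
        ("fixed the bug where exports timed out on big files",
         "Fixed: exports no longer time out on large files."),
        ("u can now schedule reports weekly",
         "Added: weekly report scheduling."),
    ],
    new_input="made the dashboard load way faster",
)
```

The two examples carry the formatting rules (tense, prefix, punctuation) implicitly, so you rarely need to spell them out as instructions.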
## Common mistakes
Three patterns waste people's time.
Asking the model to do too much in one shot. "Write a complete business plan for my AI-powered pet food delivery service" gets a generic answer. Break it down: market analysis, then competitor list, then unit economics, then go-to-market. You can paste each result back into the next prompt.
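That paste-each-result-back loop is easy to script as a chain, where each step's answer becomes context for the next. A sketch, with `ask` as a stand-in for whatever real API call you'd make:

```python
calls = []

def ask(prompt):
    """Stand-in for a real model call (swap in your provider's API here)."""
    calls.append(prompt)
    return f"(model answer to step {len(calls)})"

steps = [
    "Summarize the market for AI-powered pet food delivery in 5 bullets.",
    "Given that market summary, list the 5 closest competitors.",
    "Given those competitors, sketch unit economics for a new entrant.",
]

context = ""
for step in steps:
    prompt = (context + "\n\n" + step).strip()  # prior answers feed forward
    answer = ask(prompt)
    context += "\n\n" + answer
```

Each prompt stays small and focused, which is the whole point: the model does one job at a time instead of averaging over four.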
Not telling the model what to refuse. Say "if you don't have enough information, ask me a clarifying question instead of guessing." Otherwise the model will hallucinate plausibly rather than admit gaps.
Using vague tone words. "Make it more professional" — what does that mean? More technical? More formal? Longer sentences? Better: "Cut the contractions, replace casual idioms with neutral phrasing, keep it under 200 words."
## A practical prompt template
For any non-trivial ask, this skeleton works:
```
[Role or context]
You are a senior product manager reviewing a feature spec.

[Task]
Read the spec below and give me three concerns about scope, in order of severity.

[Constraints]
- Each concern: one sentence stating the issue, one sentence on impact.
- Don't suggest fixes.
- If the spec is too vague to evaluate, say so and ask me three clarifying questions.

[Format]
Return as a numbered markdown list.

[Input]
<paste spec here>
```
You don't need to fill every section every time, but having the template in your head means you'll notice when you skipped a piece that mattered.
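If you reuse the skeleton often, it's worth turning into a small helper. A sketch (the section order mirrors the template above; the helper name and example values are placeholders):

```python
def fill_template(role, task, constraints, output_format, input_text):
    """Assemble the role/task/constraints/format/input skeleton into one prompt."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{role}\n\n"
        f"{task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Format: {output_format}\n\n"
        f"{input_text}"
    )

prompt = fill_template(
    role="You are a senior product manager reviewing a feature spec.",
    task="Read the spec below and give me three concerns about scope, "
         "in order of severity.",
    constraints=[
        "Each concern: one sentence stating the issue, one on impact.",
        "Don't suggest fixes.",
        "If the spec is too vague, ask me three clarifying questions instead.",
    ],
    output_format="numbered markdown list",
    input_text="<paste spec here>",
)
```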
## When NOT to bother prompt-engineering
For casual chat ("what's a good Italian restaurant in Taipei?") just type the question. Optimizing prompts only pays off when:
- You're going to run the same kind of prompt many times (template, automation, app)
- The answer quality directly affects your work product
- You're hitting a wall where the default answer is consistently bad
For everything else, type naturally and iterate. "Iterate" is half of prompt engineering — you almost never get the best answer from your first message. Read the response, tell the model what you wanted instead, and it adjusts.
## Further reading
- What is a Large Language Model (LLM)
- System prompt vs user prompt: who's in charge here
- What is a context window
- What is prompt injection (and why it's dangerous)
- Temperature, top-p, top-k: sampling parameters explained