
System prompt

A special instruction at the start of an LLM conversation that sets the model's role, tone, behavior rules, and constraints for the rest of the session.

A system prompt is a separately tagged instruction that sits before any user messages and tells the model how to behave for the rest of the conversation. For example: "You are a friendly customer support agent for ACME Corp. Always answer in JSON. Never discuss competitors." Modern APIs (OpenAI, Anthropic, Gemini) treat the system prompt as a distinct field rather than as part of the user message.

It matters because it's how you turn a generic model into a specific product. ChatGPT's personality, Claude's helpful-and-harmless tone, custom GPTs, app-specific assistants: all are driven by a system prompt the user doesn't see. It's the foundation of LLM product design.

A concrete example: a coding assistant might have a system prompt like "You are an expert TypeScript developer. Always use strict typing. Prefer named exports. Never invent API endpoints; if unsure, say so. Output only the file content, no explanation." The user just types "add a debounce hook" and gets exactly the code they need.

Caveats: system prompts are not strict rules. A determined user can sometimes bypass them with prompt injection or creative phrasing, and models vary in how robust they are to such jailbreaks. For safety-critical rules, pair system prompts with output filters and tool-permission scopes.

Related: prompt engineering, prompt injection, jailbreak.
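In message-based chat APIs, the system prompt travels as its own role-tagged message at the start of the message list. A minimal sketch of that payload shape, assuming an OpenAI-style request format (`build_payload` and the model name are illustrative, not part of any real SDK):

```python
# System prompt defined once, reused for every conversation in the product.
SYSTEM_PROMPT = (
    "You are a friendly customer support agent for ACME Corp. "
    "Always answer in JSON. Never discuss competitors."
)

def build_payload(user_message: str) -> dict:
    """Assemble a chat request: the system prompt comes first,
    tagged with its own 'system' role, before any user messages."""
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_payload("Where is my order?")
```

Because the system prompt is a distinct field, the application can keep it fixed while only the user messages change from turn to turn, which is what lets one generic model behave like many different products.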

Last updated: 2026-04-29

