The core LLM task of producing free-form text in response to a prompt. It covers chat, writing, completion, and any output that is itself natural language.
Text generation is the umbrella term for the LLM task of producing natural-language text given a prompt. It covers everything from finishing a sentence ("the quick brown fox...") to writing essays, reports, code, emails, summaries, dialogue, marketing copy, or anything else where the output is itself free-form text.
It matters because text generation is the default LLM use case — almost every consumer-facing AI product (ChatGPT, Claude, Gemini, Kimi) is fundamentally a text-generation interface. The breadth of tasks an LLM can do this way is what makes the technology so flexible: a single model handles drafting marketing copy, refactoring code, translating, brainstorming, and tutoring, all under the same generic interface.
A simple example: you give the model "Write a haiku about Taipei" and it produces three lines of poetry. Or "Continue this story..." and it adds a paragraph. Or "Reply to this customer email..." and it generates a draft. The same model handles all of these without task-specific training.
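To see how little the calling code changes across these tasks, here is a minimal sketch using the Hugging Face transformers text-generation pipeline (assuming the library is installed; "gpt2" is a stand-in, since in practice you would pick an instruction-tuned model):

```python
from transformers import pipeline

# One generic text-generation interface serves every task below.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Write a haiku about Taipei",
    "Continue this story: The door creaked open and",
    "Reply to this customer email: 'My order arrived damaged.'",
]

for prompt in prompts:
    # do_sample=True enables stochastic decoding instead of greedy search.
    result = generator(prompt, max_new_tokens=60, do_sample=True)
    print(result[0]["generated_text"])
```

Only the prompt changes from task to task; the model, the call, and the decoding settings stay the same.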
Under the hood, text generation is the model autoregressively predicting one token at a time, conditioned on everything that came before. Sampling parameters (temperature, top-p) control variety, and quality depends on the prompt, the model, and how that sampling randomness is managed.
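To make the mechanics concrete, here is a rough sketch of that decoding loop in plain numpy. The temperature and top-p steps follow the standard definitions of those parameters; `model` is a hypothetical stand-in for a network's forward pass, not any particular library's API.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_p=0.9, rng=None):
    """Sample a token id from raw logits with temperature scaling
    followed by nucleus (top-p) filtering."""
    rng = rng or np.random.default_rng()
    # Temperature rescales logits before softmax: <1 sharpens the
    # distribution, >1 flattens it toward uniform.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Nucleus (top-p): keep the smallest set of highest-probability
    # tokens whose cumulative mass reaches top_p, then renormalize.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

def generate(model, token_ids, max_new_tokens=20):
    # Autoregression: each new token is conditioned on everything
    # generated so far, one step at a time.
    for _ in range(max_new_tokens):
        logits = model(token_ids)  # hypothetical forward pass
        token_ids.append(sample_next_token(logits))
    return token_ids
```

Lower temperatures concentrate probability on the top tokens, while a smaller top-p cuts off the long tail entirely; together they trade variety for predictability.

Related: chat, summarization, code generation, sampling, decoding.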