If you've spent any time on AI Twitter, you've seen the same loop: someone posts a benchmark result, someone else screenshots it and writes "AGI is here," and a third person quote-tweets "that's not even close to ASI." It looks like a real argument. It's mostly people using the same letters to mean different things.
Let's untangle.
AGI: Artificial General Intelligence
The original idea, going back to the 1950s: a machine that can perform any intellectual task a human can. Not narrow. Not specialized. General.
The word "general" is doing a lot of work. There are at least four definitions in active use:
- Economic AGI (OpenAI's charter definition): an AI system that outperforms humans at most economically valuable work.
- Cognitive AGI: matches human performance across a broad battery of cognitive tasks (reasoning, planning, language, math, perception).
- Autonomous AGI: can be set a high-level goal and pursue it for hours or days without supervision.
- DeepMind's levels framework: levels 0 through 5, where "competent AGI" performs at or above the median skilled adult on most non-physical tasks (the full ladder is sketched just below).
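For reference, the ladder from that paper looks roughly like the following (paraphrased from memory, so check the original for the exact wording); the "competent", "expert", and "virtuoso" labels that come up later in this piece are drawn from it.

```python
# Performance levels from "Levels of AGI" (Morris et al., Google DeepMind, 2023),
# paraphrased. Thresholds compare against skilled adult humans on non-physical tasks.
LEVELS_OF_AGI = {
    0: "No AI",
    1: "Emerging: equal to or somewhat better than an unskilled human",
    2: "Competent: at least 50th percentile of skilled adults",
    3: "Expert: at least 90th percentile of skilled adults",
    4: "Virtuoso: at least 99th percentile of skilled adults",
    5: "Superhuman: outperforms 100% of humans",
}
```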
In practice, when someone says "AGI is here" they usually mean one of these without specifying which. Hence the arguments.
ASI: Artificial Superintelligence
ASI is what comes after AGI. A system that is significantly better than the best humans across essentially all cognitive tasks. Not just "as smart as a smart person" — substantially beyond.
The word "superintelligence" was popularized by Nick Bostrom in 2014. His framing distinguishes:
- Speed superintelligence — same intellect as a human but thinks 1000× faster.
- Collective superintelligence — many smaller intellects working together whose combined output vastly outstrips any individual mind (think: a well-coordinated agent swarm).
- Quality superintelligence — qualitatively smarter, the way a human is qualitatively smarter than a chimp.
Most ASI conversations care about quality. The fear and hope: a system that can do science, engineering, and strategy faster and better than the best humans alive.
Where 2026 models actually sit
The honest answer is: in the messy middle of the AGI definitions, depending on which one you pick.
- Cognitive battery. Frontier 2026 models (Claude 4.7 Opus, GPT-5, Gemini 3 Ultra) match or beat the median educated adult on most written tasks: legal reasoning, code, math up to graduate level, language translation, summarization. They struggle on long-horizon planning, novel research, and tasks requiring real-world physical context.
- Economic. Models genuinely automate parts of programming, customer support, copywriting, translation, image generation. They don't yet replace whole roles end-to-end without a human supervisor.
- Autonomous. Coding agents (Claude Code, Cursor agent mode) can run for tens of minutes on isolated tasks. They cannot run for days on open-ended goals without losing the plot. This is currently the biggest gap.
- DeepMind level. Probably "competent" on text-only tasks; well below "expert" or "virtuoso" in most domains.
Someone using the cognitive definition will tell you AGI is basically here. Someone using the autonomous definition will tell you we're miles off. Both are correct under their definition.
Why ASI is a different argument
The AGI debate is empirical: do current models meet definition X? You can run benchmarks.
The ASI debate is mostly about extrapolation:
- How fast does capability scale with compute and data? Scaling laws so far suggest steady, predictable gains (see the sketch after this list); some researchers think we'll hit a wall before ASI; others think reasoning RL changes the slope.
- Does recursive self-improvement happen? A model good enough to do AI research could in principle help build the next generation faster. This is the "intelligence explosion" scenario. No empirical evidence yet that it produces a hard takeoff.
- What does "smarter than humans at everything" even look like? It might look like a calmer, faster Claude. Or it might look like something we can't anticipate. The honest answer: we don't know.
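For concreteness, "scaling laws" here usually means power-law fits like the Chinchilla parametric form from Hoffmann et al. (2022), where loss falls smoothly as parameter count N and training tokens D grow. A minimal sketch, with illustrative constants rather than the paper's exact fitted values:

```python
# Chinchilla-style parametric scaling law: predicted loss as a function of
# parameter count N and training tokens D. Constants are illustrative
# placeholders in the right ballpark, not the published fit.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 410.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """L(N, D) = E + A / N**alpha + B / D**beta."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both parameters and data shaves a small, predictable amount off the
# loss: smooth gains, no built-in discontinuity that would announce "ASI here".
print(predicted_loss(70e9, 1.4e12))    # roughly Chinchilla-scale
print(predicted_loss(140e9, 2.8e12))   # 2x params, 2x tokens: slightly lower loss
```

Whether that curve keeps holding, and whether loss is even the right proxy for "better than the best humans at everything", is exactly what the wall-versus-no-wall argument is about.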
This is why the ASI conversation goes nowhere on Twitter. It's not a factual disagreement; it's a forecasting one.
Why this matters even if you don't care about Bostrom
Three practical reasons to keep AGI and ASI clear in your head:
- Product positioning. If your AI product's pitch leans on "AGI is around the corner," you're betting on a definition. Be specific. "AGI-level coding assistant" means something. "AGI is here" doesn't.
- Hiring and timelines. OpenAI's deal with Microsoft hinges on an AGI clause, and Anthropic's corporate structure references the concept too. Labor forecasts lean on it as well: the number of jobs at risk in 2030 depends on which definition you use.
- Safety priorities. What you should worry about is very different under "AGI in 2027 means a smart writer" vs "AGI in 2027 means an autonomous research scientist."
A pragmatic take
For day-to-day building: ignore both terms. Ask instead: what can the model in front of me actually do, today? Run it on your real tasks, measure its failure modes, and pick the cheapest model that works (a minimal harness for this is sketched below).
For strategy and policy conversations: keep your definition explicit. "By AGI I mean the OpenAI-Microsoft contractual sense, namely a system that produces $100B in profit" is a defensible claim. "AGI is coming" is rhetoric.
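Here's a minimal sketch of what "run it on your real tasks" can look like. Everything in it is a hypothetical stand-in: `call_model` wraps whatever provider API you actually use, the tasks and pass/fail checks are yours to write, and the model names are placeholders.

```python
# Minimal eval harness: run each candidate model over your own tasks and
# compare pass rates. Swap the stubs for your real API calls and checks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]   # your own pass/fail criterion


def call_model(model: str, prompt: str) -> str:
    # Placeholder: replace with your provider's API call.
    return f"stub response from {model}"


def pass_rate(model: str, tasks: list[Task]) -> float:
    passed = sum(task.check(call_model(model, task.prompt)) for task in tasks)
    return passed / len(tasks)


tasks = [
    Task("Summarize this support ticket: ...", lambda out: "refund" in out.lower()),
    Task("Write a SQL query that ...", lambda out: "select" in out.lower()),
]

for model in ["cheap-model", "frontier-model"]:   # placeholder names
    print(model, pass_rate(model, tasks))
```

Track cost and latency alongside the pass rate, and the "pick the cheapest one that works" decision usually makes itself.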
When NOT to use these terms
Avoid AGI and ASI in product copy. Most readers will either (a) think you mean it literally and accuse you of overclaiming, or (b) read it as marketing fluff and tune out. Specific capability claims ("can solve 84% of AIME 2025 problems") land better.
Further reading
- Levels of AGI — DeepMind paper, 2023.
- Nick Bostrom, Superintelligence — for the original ASI taxonomy.
- The OpenAI-Microsoft AGI clause leaks (search 2024).
- Look up: scaling laws, intelligence explosion, transformative AI.