
Anthropic

An AI safety-focused lab founded by ex-OpenAI researchers (Dario and Daniela Amodei), creator of Claude — known for Constitutional AI and a research-heavy safety culture.

Anthropic is a San Francisco AI lab founded in 2021 by Dario Amodei (CEO), Daniela Amodei (President), and several other ex-OpenAI researchers. The company was founded with AI safety as its core mission: the founders believed OpenAI was moving too fast on capabilities without enough safety work. Major investors include Google ($2B+) and Amazon ($4B+).

Anthropic matters because it is the second-largest frontier model lab and arguably the most safety-focused. Its flagship model, Claude, is used by millions, and the company publishes substantial research on alignment, interpretability, and Constitutional AI. It is widely regarded as having the most rigorous approach to making capable models safe.

Key contributions: Constitutional AI (a training method that uses AI feedback guided by a written set of principles), interpretability research (understanding what happens inside neural networks), the Responsible Scaling Policy (graduated safety commitments tied to model capability levels), and the Claude product line: Opus (most capable), Sonnet (balanced), and Haiku (fast and cheap).

Anthropic is also notable for its Public Benefit Corporation status, meaning its legal mandate is not just shareholder returns but also "safe, beneficial AI". Whether that commitment holds up under commercial pressure is one of the most-watched questions in the industry.

Related: Claude family, Constitutional AI, Dario Amodei, alignment, OpenAI.

Last updated: 2026-04-29
