DeepSeek R1
Open-weight reasoning model that matches o1 on math and code at a fraction of the price.
Specs
- Context window: 128,000 tokens
- Max output: 32,768 tokens
- Modalities: text
- Tool use: —
- Vision: —
- Streaming: ✓
- License: MIT
- Released: 2025-01-20
Pricing
- Input / 1M tokens: $0.55
- Output / 1M tokens: $2.19
- Cached input / 1M tokens: $0.14
Overview
DeepSeek R1 is an MIT-licensed reasoning model trained with large-scale reinforcement learning, producing long chain-of-thought traces before its final answer. It targets math, coding, and logic problems where o1-class models pull ahead of standard chat LLMs. The 128K context and open weights make it usable for self-hosting, distillation, or fine-tuning. No tool calling, no vision — pure text reasoning.
Editor's verdict
Pick R1 when you need o1-level reasoning but want open weights or a 10× cheaper API — it's the first credible open alternative to closed reasoning models. Trade-offs are real: no tool use, no vision, and the verbose thinking traces eat output tokens and latency. For agent workflows or multimodal tasks, look elsewhere; for hard math/code prompts on a budget, it's the obvious choice.
Last updated: 2026-04-29