Model family

Llama (family)

Meta's open-weight LLM family — Llama 1, 2, 3, 4 — the foundational open-source model line that the modern self-hostable AI ecosystem is built on.

Llama is Meta's open-weight LLM family and the most influential open-source model line in modern AI. Llama 1 (February 2023) was originally released only to researchers but leaked widely; Llama 2 (July 2023) shipped openly under a permissive commercial license; Llama 3 (April 2024) raised quality dramatically; Llama 4 launched in 2025.

It matters because Llama is the foundation of the open-source LLM ecosystem. Almost every locally runnable model (Mistral fine-tunes, the early Yi models, Vicuna, Wizard, dozens of code variants) is based on Llama, fine-tuned from Llama, or deeply influenced by Llama's architecture and tokenizer. Without Meta's decision to release open weights, self-hosted AI would look completely different.

The family spans multiple scales, typically 8B, 70B, and a flagship 400B+ class, plus specialized variants such as Code Llama and Llama Guard (safety filtering). Models from a few billion parameters up are designed to run with quantization on consumer hardware.

Llama is governed by Meta's custom "Llama Community License": open enough for almost all use cases, but with some restrictions (e.g., for very large companies competing with Meta). It is not OSI-certified open source in the strict sense, but it is open by industry convention.

Related: Meta AI, Yann LeCun, open-source, Mistral, Qwen.
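As a rough check on the "runnable with quantization on consumer hardware" claim, weight memory scales with parameter count times bits per weight. A minimal sketch (weights only; it ignores the KV cache and runtime overhead, and uses the 8B/70B sizes mentioned above as illustrative inputs):

```python
def weight_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory needed to hold model weights, in GB.

    Weights-only estimate: real usage adds KV cache and framework overhead.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# An 8B model quantized to 4 bits needs roughly 4 GB for its weights,
# which fits on a typical consumer GPU or in laptop RAM:
print(weight_memory_gb(8, 4))   # 4.0

# A 70B model at 4 bits needs roughly 35 GB, beyond most single consumer GPUs:
print(weight_memory_gb(70, 4))  # 35.0
```

This is why quantization (4-bit instead of 16-bit weights) is what makes the smaller Llama-class models practical to self-host.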

Last updated: 2026-04-29
