Alibaba's Qwen open-source LLM family (Qwen 1, 1.5, 2, 2.5, 3): the most-downloaded Chinese open-weight model line on Hugging Face, released at a rapid cadence.
Qwen (通义千问, Tongyi Qianwen) is Alibaba's open-source LLM family. Generations: Qwen 1 (2023), Qwen 1.5 and Qwen 2 (2024), Qwen 2.5 (a notable quality jump in late 2024), and Qwen 3 (2025), alongside the larger Qwen3-Max. Sizes range from 0.5B-parameter models for edge use up to 235B+ mixture-of-experts (MoE) models for frontier work.
It matters because Qwen is currently the most-downloaded Chinese open-weight LLM family on Hugging Face and arguably the most reliable open-source choice for Chinese-language work. The release cadence is faster than comparable Western open-source efforts: new sizes, multimodal versions (Qwen-VL), audio (Qwen-Audio), reasoning (QwQ), code (Qwen-Coder), and agent-tuned variants ship continually.
Key strengths: strong bilingual Chinese-English performance, excellent on Chinese-specific benchmarks like C-Eval and CMMLU, and competitive with Llama on global benchmarks. Many Chinese-market builders default to Qwen for self-hosting.
Licensing varies by version: most recent open weights use Apache 2.0 or a similarly permissive license, with some commercially restricted exceptions for the largest models. Qwen also powers Alibaba's hosted Tongyi Qianwen consumer chat product and the Bailian enterprise platform. Weights are available on Hugging Face, ModelScope, and major Chinese cloud providers. Related: Alibaba, Llama, DeepSeek, ModelScope, open-source.