
LLM Deploy

langgenius/dify · TypeScript

Self-hostable low-code platform for building RAG pipelines, agents, and LLM workflows via a visual canvas.

GitHub stats

Stars: 139,637
Forks: 21,900
Watchers: 788
Open issues: 820

Meta

License: NOASSERTION (no standard SPDX identifier detected)
Primary language: TypeScript
Last commit: 2026-04-29
Stats fetched: 2026-04-29

Dify is a full-stack platform for building and shipping LLM apps without writing much glue code. It bundles a visual workflow canvas, a prompt IDE, a RAG pipeline, an agent framework, dataset management, observability, and an OpenAI-compatible API behind one Next.js + Python service. Typical path: docker compose up, connect your model provider (OpenAI / Claude / Gemini / local Ollama / vLLM), upload a knowledge base, drag nodes to build a workflow, then expose it as an API or an embedded chatbot. It also supports MCP tools and multi-tenant deployments.
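Once a workflow is published, client code calls the app's HTTP endpoint with an app-scoped API key. The sketch below assembles such a request in Python; the endpoint path, header names, and body fields follow Dify's published chat API, but treat them as assumptions and verify against your instance's API docs (the hostname and key here are placeholders):

```python
import json


def build_chat_request(api_key: str, query: str, user: str,
                       conversation_id: str = "") -> tuple[dict, bytes]:
    """Assemble headers and JSON body for a Dify chat-messages call.

    Field names ("inputs", "query", "response_mode", "user") are taken
    from Dify's published API reference; confirm them for your version.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "inputs": {},                  # workflow input variables, if any
        "query": query,                # the end-user message
        "response_mode": "blocking",   # "streaming" switches to SSE
        "conversation_id": conversation_id,  # empty starts a new thread
        "user": user,                  # stable end-user identifier
    }
    return headers, json.dumps(body).encode("utf-8")


# Hypothetical usage against a self-hosted instance, e.g. with
# urllib.request.Request("http://localhost/v1/chat-messages",
#                        data=body, headers=headers, method="POST"):
headers, body = build_chat_request("app-PLACEHOLDER", "What is Dify?", "user-42")
```

Because the payload is plain JSON, the same builder works whether the app is fronted by the bundled web UI, an embedded chatbot, or your own gateway.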

Editor's verdict

Pick Dify when you need to ship an internal RAG chatbot or workflow tool fast and want non-engineers to iterate on prompts and knowledge bases through a UI. It's the most batteries-included option in this space: closer to a product than a library, unlike LangGraph or LlamaIndex, which are SDKs. The trade-off is that you commit to Dify's data model and node abstractions; deeply custom agent logic still ends up in a "Code" node, and self-hosting the full stack (API, worker, vector DB, sandbox) is heavier than running a Python script. Skip it if you only need an SDK or your team already lives in code.

Last updated: 2026-04-29

