LLM Deploy

BerriAI/litellm · Python

Unified OpenAI-format SDK and proxy gateway for 100+ LLM providers, with cost tracking and routing.

GitHub stats

Stars: 45,219
Forks: 7,655
Watchers: 196
Open issues: 2,765

Meta

License: NOASSERTION
Primary language: Python
Last commit: 2026-04-29
Stats fetched at: 2026-04-29

LiteLLM exposes 100+ LLM providers (OpenAI, Anthropic, Bedrock, Vertex, Azure, vLLM, NIM, HuggingFace, etc.) behind a single OpenAI-compatible API. Use it as a Python SDK (`litellm.completion(...)`) for direct calls, or run the proxy server as a centralized AI Gateway with virtual keys, budgets, rate limits, fallbacks, load balancing, guardrails, and logging to Langfuse/Datadog/S3. The typical path: `pip install litellm`, point existing OpenAI clients at the proxy, and swap models by changing one string.
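A minimal sketch of the two usage modes described above. The model names, proxy URL, and virtual key are illustrative assumptions, not values from the source; `litellm.completion` and the OpenAI client's `base_url` parameter are the documented integration points.

```python
def build_messages(prompt: str) -> list[dict]:
    """OpenAI-format chat messages, accepted by both modes below."""
    return [{"role": "user", "content": prompt}]


def sdk_call(model: str, prompt: str):
    """Direct SDK mode: litellm translates the OpenAI-format call to the
    provider behind `model`. Swapping providers means changing only the
    model string (e.g. "gpt-4o" -> "claude-3-5-sonnet-20240620" --
    example names, check your provider's current model IDs).
    """
    import litellm  # pip install litellm

    return litellm.completion(model=model, messages=build_messages(prompt))


def proxy_client(base_url: str = "http://localhost:4000",
                 api_key: str = "sk-my-virtual-key"):
    """Proxy/gateway mode: point an existing OpenAI client at the LiteLLM
    proxy. The URL and virtual key here are placeholders -- the proxy
    issues virtual keys and enforces budgets/rate limits centrally, so
    application code stays unchanged.
    """
    from openai import OpenAI  # pip install openai

    return OpenAI(base_url=base_url, api_key=api_key)
```

In gateway mode the only client-side change is the `base_url` and the virtual key; requests, streaming, and tool calls keep the OpenAI wire format.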

Editor's verdict

The default pick when your team needs multi-provider access with central key management and cost attribution. Proxy mode is essentially production-ready and saves you from building an internal gateway. The SDK is fine for quick scripts, but it adds a dependency layer that occasionally lags behind upstream provider features (new tool-calling fields, beta endpoints). Skip it if you only ever call one provider, or if you want a Go/Rust gateway for tail latency; consider Portkey or OpenRouter as managed alternatives.

Last updated: 2026-04-29

BerriAI/litellm · BuilderWorld