LiteLLM Integration
One API for all AI models.
Overview
LibreApps Desktop uses LiteLLM as a proxy layer between your application and various AI model providers. This allows you to use a single, consistent API to interact with models from OpenAI, Anthropic, Google, Azure, and many others.
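For illustration, here is a minimal sketch of what that unified API looks like when calling LiteLLM's Python SDK directly. The model strings are examples, and the API keys are assumed to be set as environment variables:

```python
# Minimal sketch: one call shape for two different providers.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from litellm import completion

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# An OpenAI model...
gpt_response = completion(model="openai/gpt-4", messages=messages)

# ...and an Anthropic model: only the model string changes, the call is identical.
claude_response = completion(model="anthropic/claude-3-opus-20240229", messages=messages)

# Responses follow the OpenAI response shape regardless of provider.
print(gpt_response.choices[0].message.content)
print(claude_response.choices[0].message.content)
```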
Key Benefits
- Unified API: Use the same code to call different models, regardless of the provider.
- Easy Model Switching: Change models with a simple configuration update; no code changes required.
- Cost Tracking: LiteLLM provides built-in tools for tracking token usage and costs across all your models.
- Load Balancing: Distribute requests across multiple model instances or providers for better performance and reliability.
- Fallbacks: Automatically switch to a backup model if your primary model is unavailable or fails (see the sketch after this list).
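As a rough sketch of how fallbacks and load balancing fit together, LiteLLM's Router accepts a model list plus a fallbacks mapping. The model names below mirror the example configuration later on this page; the exact deployments are illustrative, not LibreApps Desktop defaults:

```python
# Sketch of fallbacks (and, by extension, load balancing) with litellm.Router.
import os

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "openai/gpt-4",
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
        },
        {
            "model_name": "claude-3",
            "litellm_params": {
                "model": "anthropic/claude-3-opus-20240229",
                "api_key": os.getenv("ANTHROPIC_API_KEY"),
            },
        },
        # Repeating a model_name with different litellm_params adds another
        # deployment to that group, and the Router load-balances across them.
    ],
    # If a "gpt-4" call fails, retry the same request against "claude-3".
    fallbacks=[{"gpt-4": ["claude-3"]}],
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
)
print(response.choices[0].message.content)
```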
Configuration
LiteLLM is typically configured via a config.yaml file or environment variables. In LibreApps Desktop, this configuration is managed by the AI Chat Server.
Example Configuration
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: claude-3
    litellm_params:
      model: anthropic/claude-3-opus-20240229
      api_key: "os.environ/ANTHROPIC_API_KEY"
```
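The "os.environ/OPENAI_API_KEY" value is LiteLLM's convention for reading a secret from an environment variable at runtime, so API keys never have to be written into the config file itself.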
Usage in LibreApps Desktop
When you use the LibreApps Desktop AI Chat widget or the AI Services API, you specify the model_name defined in your LiteLLM configuration. The AI Chat Server then forwards the request to LiteLLM, which handles communication with the actual provider.
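Because the LiteLLM proxy speaks the OpenAI API format, any OpenAI-compatible client can talk to it. As a sketch, the base URL below is a placeholder; use whatever address your AI Chat Server actually exposes, and whatever key (if any) your proxy is configured to accept:

```python
# Sketch: calling the LiteLLM proxy with the OpenAI SDK.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # hypothetical proxy address
    api_key="sk-placeholder",          # replace with your proxy's key, if one is set
)

# "claude-3" is the model_name from the example config; LiteLLM maps it to
# anthropic/claude-3-opus-20240229 and calls the provider on your behalf.
response = client.chat.completions.create(
    model="claude-3",
    messages=[{"role": "user", "content": "Hello from LibreApps Desktop!"}],
)
print(response.choices[0].message.content)
```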
Best Practices
- ✅ Do this: Use LiteLLM to abstract away provider-specific details and make your application more flexible.
- ✅ Do this: Set up fallbacks for critical AI features to ensure high availability.
- ❌ Don't do this: Hardcode provider-specific API keys in your frontend; route requests through the LiteLLM proxy instead.