# Model Management
Choosing and configuring the right LLMs for your dashboard.
## Overview
LibreApps Desktop supports a wide range of large language models (LLMs) through its LiteLLM integration. You can choose the best model for each task, whether that calls for high-performance reasoning, low-latency chat, or cost-effective bulk processing.
## Supported Providers
- OpenAI: GPT-4, GPT-3.5 Turbo
- Anthropic: Claude 3 (Opus, Sonnet, Haiku)
- Google: Gemini Pro, Gemini Ultra
- Meta: Llama 3 (via providers like Groq or Together AI)
- Mistral: Mistral Large, Mixtral 8x7B
- Self-Hosted: Any model compatible with the OpenAI API format (e.g., via vLLM or Ollama).
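LiteLLM identifies the backend by a provider prefix on the model string (for example `openai/gpt-4` or `anthropic/claude-3-haiku-20240307`). A minimal sketch of that convention, with a hypothetical helper of our own (not a LiteLLM API):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a provider-prefixed model string into (provider, model_id).

    In this sketch, strings without a prefix fall back to "openai",
    which also covers OpenAI-compatible self-hosted servers.
    """
    provider, sep, model_id = model.partition("/")
    if not sep:  # no prefix present
        return "openai", model
    return provider, model_id

print(split_model_string("anthropic/claude-3-haiku-20240307"))
# → ('anthropic', 'claude-3-haiku-20240307')
```

The same prefix scheme is what appears later in `config.yaml` under `litellm_params.model`.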
## Choosing a Model
| Use Case | Recommended Model | Why? |
|---|---|---|
| Complex Reasoning | GPT-4 / Claude 3 Opus | Best-in-class logic and problem-solving. |
| Fast Chat | GPT-3.5 / Claude 3 Haiku | Low latency and high throughput. |
| Data Extraction | Gemini Pro | Large context window and strong structured output. |
| Privacy Focused | Llama 3 (Self-hosted) | Keep all data within your own infrastructure. |
## Configuring Models
Models are defined in the LiteLLM `config.yaml`. You can assign friendly names to models so they are easier to reference in your code:
```yaml
model_list:
  - model_name: assistant-pro
    litellm_params:
      model: openai/gpt-4
  - model_name: assistant-fast
    litellm_params:
      model: anthropic/claude-3-haiku-20240307
```
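The friendly names act as aliases: a request for `assistant-pro` resolves to the underlying provider model. A minimal sketch of that lookup, where the inline dict mirrors the `model_list` above and `resolve_model` is our illustrative helper rather than a LiteLLM function:

```python
# Mirrors the model_list section of the config.yaml example above.
MODEL_LIST = [
    {"model_name": "assistant-pro",
     "litellm_params": {"model": "openai/gpt-4"}},
    {"model_name": "assistant-fast",
     "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"}},
]

def resolve_model(alias: str) -> str:
    """Return the provider-prefixed model behind a friendly name."""
    for entry in MODEL_LIST:
        if entry["model_name"] == alias:
            return entry["litellm_params"]["model"]
    raise KeyError(f"unknown model alias: {alias}")

print(resolve_model("assistant-pro"))  # → openai/gpt-4
```

Because callers only see the alias, you can later swap `assistant-pro` to a different backend model by editing the config, without touching application code.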
## Best Practices
- ✅ Do this: Start with a versatile model like GPT-4 or Claude 3 Sonnet for development.
- ✅ Do this: Use smaller, faster models for simple tasks to improve user experience and reduce costs.
- ❌ Don't do this: Use a high-cost model for tasks that don't require complex reasoning.
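The cost guidance above can be sketched as a simple task router that sends heavyweight work to a strong model and everything else to a cheap, fast one. The alias names follow the config example; the keyword heuristic is purely illustrative:

```python
def pick_model(task: str) -> str:
    """Route a task description to a model alias by rough complexity.

    Illustrative heuristic only: real routing might consider prompt
    length, required accuracy, or a latency/cost budget instead.
    """
    heavy_markers = ("analyze", "reason", "plan", "prove")
    if any(marker in task.lower() for marker in heavy_markers):
        return "assistant-pro"   # high-cost, strong reasoning
    return "assistant-fast"      # low-latency, low-cost default

print(pick_model("Summarize this email"))        # → assistant-fast
print(pick_model("Analyze quarterly variance"))  # → assistant-pro
```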