# Example: Multi-Model Setup
How to offer different AI models to your users.
## Overview
In this example, we'll configure LibreApps Desktop to allow users to switch between a "Fast" model (Claude 3 Haiku) and a "Pro" model (GPT-4).
## Implementation Steps
### 1. Configure LiteLLM
Update your `config.yaml` to include both models:
```yaml
model_list:
  # API keys are read from the ANTHROPIC_API_KEY / OPENAI_API_KEY
  # environment variables by default.
  - model_name: fast-chat
    litellm_params:
      model: anthropic/claude-3-haiku-20240307
  - model_name: pro-chat
    litellm_params:
      model: openai/gpt-4
```
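To sanity-check the routing, you can call the proxy's OpenAI-compatible endpoint for each alias. This is a minimal sketch: port 4000 is LiteLLM's default, and the script assumes no master key is configured on the proxy.

```typescript
// Smoke test: ask each configured model alias for a short reply.
// Assumes the LiteLLM proxy runs locally on its default port (4000)
// with no master key; add an Authorization header if yours requires one.
const BASE_URL = "http://localhost:4000";

for (const model of ["fast-chat", "pro-chat"]) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "Say 'ready' and nothing else." }],
    }),
  });
  const data = await res.json();
  console.log(`${model}:`, data.choices[0].message.content);
}
```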
### 2. Update the Chat Server
Ensure the `ai-chat-server` can accept a `model` parameter in its request body and pass it to LiteLLM.
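A minimal sketch of such a handler with Express is below. The `/chat` route, the `ALLOWED_MODELS` allow-list, and the `LITELLM_BASE_URL` environment variable are illustrative assumptions, not the actual `ai-chat-server` internals.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical settings; adjust to match your deployment.
const LITELLM_BASE_URL = process.env.LITELLM_BASE_URL ?? "http://localhost:4000";
const ALLOWED_MODELS = new Set(["fast-chat", "pro-chat"]);
const DEFAULT_MODEL = "fast-chat";

app.post("/chat", async (req, res) => {
  const { messages, model } = req.body;

  // Validate the client-supplied model so users can't route requests
  // to arbitrary upstream models; fall back to the default otherwise.
  const resolvedModel = ALLOWED_MODELS.has(model) ? model : DEFAULT_MODEL;

  // Requires Node 18+ for the global fetch.
  const upstream = await fetch(`${LITELLM_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: resolvedModel, messages }),
  });

  res.status(upstream.status).json(await upstream.json());
});

app.listen(3001);
```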
### 3. Add a Model Selector to the Frontend
In your dashboard, add a dropdown or toggle that allows the user to select their preferred model.
```tsx
import { useState } from 'react';

const [selectedModel, setSelectedModel] = useState('fast-chat');

// ... in the JSX
<select value={selectedModel} onChange={(e) => setSelectedModel(e.target.value)}>
  <option value="fast-chat">Fast (Claude 3 Haiku)</option>
  <option value="pro-chat">Pro (GPT-4)</option>
</select>
```
### 4. Pass the Model to the Smash Widget
Update the `SmashChatWidget` to include the selected model in its requests:
```tsx
<SmashChatWidget
  model={selectedModel}
/>
```
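Putting steps 3 and 4 together, a complete component might look like the sketch below. The `@smash/chat-widget` import path is a placeholder; use whatever path your project already imports the widget from.

```tsx
import { useState } from "react";
import { SmashChatWidget } from "@smash/chat-widget"; // placeholder import path

export function ChatPanel() {
  // "fast-chat" is the sensible default; users opt into "pro-chat" as needed.
  const [selectedModel, setSelectedModel] = useState("fast-chat");

  return (
    <div>
      <select
        value={selectedModel}
        onChange={(e) => setSelectedModel(e.target.value)}
      >
        <option value="fast-chat">Fast (Claude 3 Haiku)</option>
        <option value="pro-chat">Pro (GPT-4)</option>
      </select>
      <SmashChatWidget model={selectedModel} />
    </div>
  );
}
```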
## Why this Matters
- User Choice: Allow users to prioritize speed or quality based on their needs.
- Cost Management: Encourage users to use the "Fast" model for simple tasks to save on API costs.
- A/B Testing: Compare the performance and user satisfaction of different models in real-time.
## Best Practices
- ✅ Do this: Provide clear descriptions of the differences between the models (e.g., "Fast: quick, inexpensive responses" vs. "Pro: slower but more capable").
- ✅ Do this: Set a sensible default model for all users.
- ❌ Don't do this: Offer too many models; it can overwhelm users and make the interface cluttered.