AI Services Overview
The backend services that power LibreApps Desktop's AI features.
Overview
The AI Services layer in LibreApps Desktop is responsible for managing interactions with Large Language Models (LLMs). It provides a secure and scalable way to integrate AI into your application, handling everything from prompt engineering to usage tracking.
Key Components
1. AI Chat Server
A Python-based microservice that handles conversational AI requests. It manages chat history, context injection, and real-time streaming.
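The history management and context injection described above can be sketched as a small session object. This is an illustrative example, not the actual server's API; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """Holds one conversation's history (hypothetical sketch, not the real server API)."""
    system_prompt: str
    history: list = field(default_factory=list)
    max_turns: int = 20  # keep the context sent to the LLM bounded

    def build_messages(self, user_input: str) -> list:
        """Record the new user turn, then inject the system prompt ahead of trimmed history."""
        self.history.append({"role": "user", "content": user_input})
        trimmed = self.history[-self.max_turns:]
        return [{"role": "system", "content": self.system_prompt}, *trimmed]

    def record_reply(self, text: str) -> None:
        """Append the assistant's (possibly streamed, then assembled) reply to history."""
        self.history.append({"role": "assistant", "content": text})
```

In a real deployment this sits behind an HTTP endpoint that streams the model's reply back to the client chunk by chunk, calling `record_reply` once the stream completes.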
2. LiteLLM Proxy
A unified interface for connecting to 100+ LLM providers (OpenAI, Anthropic, Google, etc.). It allows you to switch models with a simple configuration change.
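A model swap typically means editing the proxy's config rather than touching application code. The fragment below follows the `model_list` / `litellm_params` shape from LiteLLM's proxy configuration; the alias and model names are placeholders, not values LibreApps Desktop prescribes.

```yaml
model_list:
  - model_name: chat-default            # alias the frontend requests, stays stable
    litellm_params:
      model: openai/gpt-4o-mini         # swap providers by editing only this block
      api_key: os.environ/OPENAI_API_KEY
```

Because clients only ever reference the `chat-default` alias, pointing `litellm_params` at a different provider (for example an `anthropic/...` model) requires no frontend changes.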
3. Model Management
Tools and configurations for defining which models are available to your users and how they should be used.
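One common shape for this is an allowlist that maps model aliases to limits and permitted roles, checked before a request reaches the proxy. The registry below is purely illustrative; the names, limits, and role scheme are assumptions.

```python
# Hypothetical model registry: aliases, limits, and roles are illustrative only.
AVAILABLE_MODELS = {
    "chat-default":  {"max_tokens": 4096, "roles": {"user", "admin"}},
    "chat-advanced": {"max_tokens": 8192, "roles": {"admin"}},
}

def resolve_model(requested: str, user_role: str) -> str:
    """Return the requested model alias if this role may use it, else raise."""
    spec = AVAILABLE_MODELS.get(requested)
    if spec is None or user_role not in spec["roles"]:
        raise PermissionError(f"model {requested!r} not available to role {user_role!r}")
    return requested
```

Keeping this check on the backend means the frontend can list models freely while the server remains the single authority on what actually runs.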
4. Usage Tracking
A system for monitoring and logging AI usage, allowing you to track costs and optimize performance.
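A minimal version of such a tracker records per-request token counts and multiplies by a price table. The sketch below assumes illustrative per-1K-token prices; real rates come from your provider, and the class is hypothetical, not the actual tracking system.

```python
import time

# Illustrative per-1K-token prices (USD); real rates come from your LLM provider.
PRICES_PER_1K = {"chat-default": 0.002}

class UsageTracker:
    """Accumulates token counts and estimated cost per request (sketch)."""
    def __init__(self):
        self.records = []

    def log(self, model: str, prompt_tokens: int, completion_tokens: int) -> float:
        """Record one request; return its estimated cost."""
        total = prompt_tokens + completion_tokens
        cost = total / 1000 * PRICES_PER_1K.get(model, 0.0)
        self.records.append(
            {"ts": time.time(), "model": model, "tokens": total, "cost": cost}
        )
        return cost

    def total_cost(self) -> float:
        return sum(r["cost"] for r in self.records)
```

Aggregating these records by model, user, or time window is what lets you spot which features drive cost and where a cheaper model would do.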
Why Use AI Services?
- Security: Protect your API keys by keeping them on the backend.
- Flexibility: Easily switch between different LLM providers without changing your frontend code.
- Scalability: Handle high volumes of AI requests with a dedicated microservice.
- Control: Implement custom logic for prompt filtering, rate limiting, and response formatting.
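As one example of the custom control logic above, per-user rate limiting can be sketched with a sliding window. This is a simplified in-memory version for illustration; a production service would likely back it with a shared store such as Redis.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most max_requests per user within window_s seconds (sketch)."""
    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self._hits = defaultdict(deque)  # user_id -> timestamps of recent requests

    def allow(self, user_id: str, now: float = None) -> bool:
        """Return True and record the hit if the user is under their limit."""
        now = time.monotonic() if now is None else now
        hits = self._hits[user_id]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window_s:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False
        hits.append(now)
        return True
```

The same pre-request hook is a natural place for prompt filtering and the post-request hook for response formatting.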
Best Practices
- ✅ Do this: Use the AI Chat Server for all conversational features.
- ✅ Do this: Leverage LiteLLM to experiment with different models and find the best fit for your use case.
- ❌ Don't do this: Expose your LLM provider API keys directly to the frontend.