AI Usage Tracking

Monitor and optimize your AI infrastructure.

Overview

Tracking AI usage is essential for managing costs, identifying performance issues, and understanding how users are interacting with your AI features. LibreApps Desktop provides built-in tools for logging and analyzing AI requests.

Key Metrics

  • Token Usage: Track the number of input and output tokens for each request.
  • Latency: Measure the time it takes for the AI to respond.
  • Cost: Calculate the estimated cost of each request based on the provider's pricing.
  • Success Rate: Monitor the percentage of successful vs. failed AI requests.
  • User Activity: Identify which users are using AI features most frequently.
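The cost metric above is derived from token counts and a per-model price table. As a minimal sketch, the pricing figures and model name below are illustrative placeholders, not actual provider rates:

```python
# Illustrative per-1K-token prices in USD; real prices vary by provider
# and model, so substitute your provider's current rate card.
PRICING = {
    "gpt-4o-mini": {"input": 0.00015, "output": 0.0006},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request from its token counts."""
    rates = PRICING[model]
    return (input_tokens / 1000) * rates["input"] + (output_tokens / 1000) * rates["output"]

cost = estimate_cost("gpt-4o-mini", input_tokens=2000, output_tokens=500)
print(f"${cost:.6f}")  # 2 * 0.00015 + 0.5 * 0.0006 = $0.000600
```

Summing this per-request estimate over a billing window gives the spend figures you would compare against provider invoices.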

How it Works

The LiteLLM proxy can log every request and response to a backing database (typically PostgreSQL; Redis is used for caching rather than long-term logging). This data can then be visualized with tools like Grafana or surfaced through a custom management dashboard.
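Once requests are landing in a database, the key metrics reduce to simple aggregate queries. The schema below is a hypothetical simplification for illustration (LiteLLM's actual spend-log tables differ, so adapt the query to your own layout); an in-memory SQLite database stands in for PostgreSQL:

```python
import sqlite3

# Hypothetical request-log schema; adapt column names to your real tables.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE request_logs (
        model TEXT, latency_ms REAL, total_tokens INTEGER, success INTEGER
    )
""")
conn.executemany(
    "INSERT INTO request_logs VALUES (?, ?, ?, ?)",
    [
        ("gpt-4o-mini", 420.0, 350, 1),
        ("gpt-4o-mini", 910.0, 1200, 1),
        ("gpt-4o-mini", 15000.0, 0, 0),  # a failed (timed-out) request
    ],
)

# One query yields request count, average latency, token total, and success rate.
row = conn.execute("""
    SELECT COUNT(*), AVG(latency_ms), SUM(total_tokens), AVG(success) * 100.0
    FROM request_logs
""").fetchone()
print(f"requests={row[0]} avg_latency_ms={row[1]:.0f} "
      f"tokens={row[2]} success_rate={row[3]:.1f}%")
```

The same aggregates grouped by model or by user ID give the per-model cost and per-user activity breakdowns described above.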

Configuration

To enable usage tracking in LiteLLM, update your config.yaml:

litellm_settings:
  success_callback: ["langfuse", "prometheus"]
  failure_callback: ["langfuse"]

In this example, LiteLLM is configured to send usage data to Langfuse (for detailed tracing) and Prometheus (for real-time monitoring).
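Beyond the named integrations, LiteLLM also accepts custom Python callbacks. The sketch below follows LiteLLM's custom-callback calling convention (a kwargs dict, the completion response, and start/end timestamps); the field names accessed on the response assume an OpenAI-style usage object, and the mock demo data is purely illustrative:

```python
from datetime import datetime, timedelta

def track_usage(kwargs, completion_response, start_time, end_time):
    """Record token usage and latency for one successful request.

    Register with: litellm.success_callback = [track_usage]
    """
    usage = completion_response["usage"]  # assumes OpenAI-style usage fields
    record = {
        "model": kwargs.get("model"),
        "input_tokens": usage["prompt_tokens"],
        "output_tokens": usage["completion_tokens"],
        "latency_s": (end_time - start_time).total_seconds(),
    }
    # In production, write `record` to your metrics store instead of printing.
    print(record)
    return record

# Demo with mock data (illustrative only):
_start = datetime(2024, 1, 1, 12, 0, 0)
record = track_usage(
    {"model": "gpt-4o-mini"},
    {"usage": {"prompt_tokens": 10, "completion_tokens": 5}},
    _start,
    _start + timedelta(seconds=1.5),
)
```

A custom callback like this is useful when you want usage data in an internal store rather than a third-party tracing service.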

Best Practices

  • Do this: Set up alerts for unusual spikes in AI usage or costs.
  • Do this: Use usage data to identify opportunities for model optimization (e.g., switching to a cheaper model for certain tasks).
  • Don't do this: Ignore usage tracking until you receive a large bill from your AI provider.
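The spike-alert practice above can be sketched as a simple threshold check. This is a minimal illustration, assuming you can already pull a list of recent daily costs (in USD) from your usage database:

```python
def check_cost_spike(daily_costs, threshold=2.0):
    """Return True if today's cost exceeds `threshold` times the trailing average.

    `daily_costs` is a chronological list of daily USD totals; the last
    entry is treated as today. Tune `threshold` to your tolerance.
    """
    *history, today = daily_costs
    baseline = sum(history) / len(history)
    return today > threshold * baseline

# Today (~$12.90) is roughly 3x the trailing average (~$4.13) -> alert.
print(check_cost_spike([4.1, 3.8, 4.5, 12.9]))
```

In practice you would run a check like this on a schedule and route a positive result to your paging or chat-alert system.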