Observability
Metrics
Analytics provide a high-level overview of how your application is using Generative AI, including metrics such as cost, tokens, requests, latency, and top models.
Metrics can be broken down by user, model, latency, status, and more, and filtered to a specific time period. The primary metrics we track are:
- Cost: Discover how expensive your LLMs are for your business.
- Latency: Accurately measure how quickly your LLM requests respond.
- Tokens: Track input, output, and total token usage.
- Models: View the breakdown of your model usage.
- Users: Visualize how many users are active on your application.
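To make these metrics concrete, here is a minimal sketch of how they could be aggregated from per-request records and filtered to a time window. This is not Puzzlet's actual API; the record shape and field names are illustrative assumptions.

```typescript
// Hypothetical per-request record; field names are illustrative, not Puzzlet's schema.
interface RequestRecord {
  userId: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
  costUsd: number;
  status: "success" | "error";
  timestamp: Date;
}

interface MetricsSummary {
  totalCostUsd: number;
  avgLatencyMs: number;
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  requestsByModel: Record<string, number>;
  activeUsers: number;
}

// Aggregate cost, latency, token, model, and user metrics for records within a time window.
function summarize(records: RequestRecord[], from: Date, to: Date): MetricsSummary {
  const inWindow = records.filter((r) => r.timestamp >= from && r.timestamp <= to);

  const totalCostUsd = inWindow.reduce((sum, r) => sum + r.costUsd, 0);
  const inputTokens = inWindow.reduce((sum, r) => sum + r.inputTokens, 0);
  const outputTokens = inWindow.reduce((sum, r) => sum + r.outputTokens, 0);
  const avgLatencyMs =
    inWindow.length === 0
      ? 0
      : inWindow.reduce((sum, r) => sum + r.latencyMs, 0) / inWindow.length;

  // Breakdown by model: request count per model name.
  const requestsByModel: Record<string, number> = {};
  for (const r of inWindow) {
    requestsByModel[r.model] = (requestsByModel[r.model] ?? 0) + 1;
  }

  // Active users: distinct user IDs seen in the window.
  const activeUsers = new Set(inWindow.map((r) => r.userId)).size;

  return {
    totalCostUsd,
    avgLatencyMs,
    inputTokens,
    outputTokens,
    totalTokens: inputTokens + outputTokens,
    requestsByModel,
    activeUsers,
  };
}
```

The same per-request records can be grouped by user, model, or status instead of summed globally, which is what the breakdowns and filters above expose.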
Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at hello@puzzlet.ai for support
- Schedule an Enterprise Demo to learn about our business solutions