# Observability Overview

Monitor and debug your prompts with Puzzlet.
Puzzlet builds on top of OpenTelemetry for collecting telemetry data from your prompts. This helps you monitor, debug, and optimize your LLM applications in production.
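Because the spans are standard OpenTelemetry data, they can be exported to any OTLP-compatible backend. The sketch below uses the standard OpenTelemetry Node SDK; how Puzzlet attaches to this pipeline is an assumption for illustration, not taken from the Puzzlet docs:

```typescript
// Standard OpenTelemetry Node SDK setup. Spans emitted by instrumented
// libraries in this process are collected by the SDK and exported over OTLP.
// The collector URL is a placeholder for your own endpoint.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "https://collector.example.com/v1/traces",
  }),
});

sdk.start(); // call once at process startup, before handling requests
```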
## What We Track
Puzzlet automatically collects the following (an example span is sketched after this list):
- **Inference Spans**: Full lifecycle of prompt execution
  - Token usage
  - Response times
  - Model information
  - Completion status
  - Cost
  - Response status
- **Tool Calls**: When your prompts use tools
  - Tool name and parameters
  - Execution duration
  - Success/failure status
  - Return values
- **Streaming Metrics**: For streaming responses
  - Time to first token
  - Tokens per second
  - Total streaming duration
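To make the shape of this data concrete, here is a hypothetical inference span. The attribute keys loosely follow the OpenTelemetry GenAI semantic conventions; Puzzlet's actual schema may differ:

```typescript
// Illustrative only: attribute names are assumptions, not Puzzlet's schema.
const exampleInferenceSpan = {
  name: "inference",
  attributes: {
    "gen_ai.request.model": "gpt-4o",   // model information
    "gen_ai.usage.input_tokens": 412,   // token usage
    "gen_ai.usage.output_tokens": 128,
    "llm.cost.usd": 0.0057,             // cost (hypothetical key)
    "llm.response.status": "success",   // completion/response status
  },
  startTime: "2024-01-01T00:00:00.000Z",
  durationMs: 1843,                     // response time
};
```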
## Basic Usage
Enable telemetry in your Puzzlet client:
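A minimal sketch is shown below. The package name, constructor, and `telemetry` option are assumptions made for illustration; check the SDK reference for the actual client options:

```typescript
// Hypothetical client setup; names here are illustrative, not the real API.
import { Puzzlet } from "@puzzlet/sdk"; // assumed package name

const client = new Puzzlet({
  apiKey: process.env.PUZZLET_API_KEY!,
  telemetry: {
    enabled: true, // turn on OpenTelemetry-based tracing for prompt runs
  },
});
```

Once telemetry is enabled, the inference spans, tool calls, and streaming metrics described above are collected automatically.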
## Learn More
For detailed information about spans, metrics, and custom configuration, see:
## Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at hello@puzzlet.ai for support
- Schedule an Enterprise Demo to learn about our business solutions