Observability
Monitor and debug your prompts using OpenTelemetry
AgentMark uses OpenTelemetry for collecting telemetry data. OpenTelemetry is an open-source observability framework that provides vendor-agnostic APIs, libraries, and instrumentation for collecting distributed traces and metrics.
Enabling Telemetry
Enable telemetry when calling `runInference`:
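A minimal sketch of what this can look like, assuming `runInference` accepts a `telemetry` option whose fields (`isEnabled`, `functionId`, `metadata`) mirror the span attributes documented below; the exact option names are illustrative, not a confirmed signature:

```typescript
// Illustrative sketch: the shape of the `telemetry` option is an assumption
// based on the ai.telemetry.* attributes listed later on this page.
const result = await runInference(prompt, {
  telemetry: {
    isEnabled: true,                 // turn on span collection
    functionId: "support-triage",    // surfaces as ai.telemetry.functionId
    metadata: {
      userId: "user-123",            // surfaces as ai.telemetry.metadata.userId
    },
  },
});
```

Anything you put under `metadata` is attached to the spans, so include identifiers (user, session, request) that you will want to filter on later.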
Telemetry with Puzzlet
The easiest way to get started with observability is to use Puzzlet. Puzzlet automatically collects and visualizes telemetry data from your prompts:
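As a rough sketch, wiring a project up to Puzzlet might look like the following; note that the `PuzzletClient` name and its constructor options here are hypothetical placeholders, not a documented API:

```typescript
// Hypothetical sketch only: PuzzletClient and its options are assumptions,
// shown to convey the shape of the integration, not its real API.
const puzzlet = new PuzzletClient({
  apiKey: process.env.PUZZLET_API_KEY, // authenticate against your Puzzlet project
});

// Prompts run through the client report their telemetry to Puzzlet,
// where the spans described below are collected and visualized.
const result = await runInference(prompt, {
  telemetry: { isEnabled: true },
});
```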
Collected Spans
AgentMark records the following span types:
Span Type | Description | Attributes |
---|---|---|
`ai.inference` | Full length of the inference call | `operation.name`, `ai.operationId`, `ai.prompt`, `ai.response.text`, `ai.response.toolCalls`, `ai.response.finishReason` |
`ai.toolCall` | Individual tool executions | `operation.name`, `ai.operationId`, `ai.toolCall.name`, `ai.toolCall.args`, `ai.toolCall.result` |
`ai.stream` | Streaming response data | `ai.response.msToFirstChunk`, `ai.response.msToFinish`, `ai.response.avgCompletionTokensPerSecond` |
Basic LLM Span Information
Each LLM span contains:
Attribute | Description |
---|---|
`ai.model.id` | Model identifier |
`ai.model.provider` | Model provider name |
`ai.usage.promptTokens` | Number of prompt tokens |
`ai.usage.completionTokens` | Number of completion tokens |
`ai.settings.maxRetries` | Maximum retry attempts |
`ai.telemetry.functionId` | Function identifier |
`ai.telemetry.metadata.*` | Custom metadata |
Custom OpenTelemetry Setup
For custom OpenTelemetry configuration:
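If you would rather send traces to your own backend, a generic Node.js setup with the standard OpenTelemetry packages looks roughly like this (the service name and collector URL are placeholders; how AgentMark discovers the registered tracer provider may vary):

```typescript
// Generic OpenTelemetry Node.js setup using the standard SDK packages.
// The OTLP endpoint and service name below are placeholder values.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "my-agentmark-app",
  traceExporter: new OTLPTraceExporter({
    url: "https://your-collector.example.com/v1/traces", // any OTLP/HTTP endpoint
  }),
});

// Registers a global tracer provider; instrumented code (including the
// telemetry-enabled inference calls above) will export spans through it.
sdk.start();
```

Call `sdk.shutdown()` on process exit to flush any buffered spans before the process terminates.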
Best Practices
- Enable telemetry in production environments
- Use meaningful function IDs
- Include relevant metadata for debugging
- Monitor token usage and costs
Learn More
Have Questions?
We’re here to help! Choose the best way to reach us:
Join our Discord community for quick answers and discussions
Email us at hello@puzzlet.ai for support
Schedule an Enterprise Demo to learn about our business solutions