Puzzlet uses OpenTelemetry to collect telemetry data, similar to AgentMark’s observability system. This provides a vendor-agnostic way to capture distributed traces and metrics for your prompt executions.

Enabling Tracing

Enable tracing in your Puzzlet client:

import { Puzzlet } from "@puzzlet/sdk";

const puzzletClient = new Puzzlet({
  apiKey: process.env.PUZZLET_API_KEY!,
  appId: process.env.PUZZLET_APP_ID!
});

// Initialize tracing
const tracer = puzzletClient.initTracing();

// Run your inference (Prompt, props, and runInference come from your
// prompt setup and are assumed to be defined elsewhere)
const result = await runInference(Prompt, props, { 
  telemetry: { 
    isEnabled: true,
    functionId: "my-function",
    metadata: {
      userId: "123",
    }
  } 
});

// Shutdown tracer (only needed for short-running scripts, local testing)
await tracer.shutdown();
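In a long-running service you would instead flush spans when the process exits. A minimal sketch of that pattern, assuming only that Puzzlet's tracer exposes a `shutdown()` method (the `ShutdownCapable` interface and `registerTracerShutdown` helper below are illustrative, not part of the SDK):

```typescript
// Hypothetical minimal shape: anything exposing shutdown() works here.
interface ShutdownCapable {
  shutdown(): Promise<void>;
}

// Register a flush handler that runs exactly once, even if several
// exit signals fire. Returns the handler so it can also be awaited directly.
function registerTracerShutdown(
  tracer: ShutdownCapable,
  onExit: (handler: () => Promise<void>) => void
): () => Promise<void> {
  let done = false;
  const flush = async () => {
    if (done) return;
    done = true;
    await tracer.shutdown(); // export any buffered spans before exit
  };
  onExit(flush);
  return flush;
}

// Assumed wiring in a Node service:
// registerTracerShutdown(tracer, (h) => process.on("SIGTERM", h));
```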

Collected Spans

Puzzlet records the following OpenTelemetry spans:

| Span Type | Description | Attributes |
|---|---|---|
| ai.inference | Full length of the inference call | operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason |
| ai.toolCall | Individual tool executions | operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result |
| ai.stream | Streaming response data | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond |
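If you export these spans to your own backend, you can post-process them by span name. A small sketch, where `ExportedSpan` is a simplified assumed shape rather than Puzzlet's actual export type:

```typescript
// Simplified stand-in for an exported span (assumed shape, not the SDK's type).
interface ExportedSpan {
  name: string;                         // e.g. "ai.inference", "ai.toolCall"
  attributes: Record<string, unknown>;
}

// Count spans per span type — handy for spotting unexpectedly chatty tool loops.
function countSpansByType(spans: ExportedSpan[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const span of spans) {
    counts[span.name] = (counts[span.name] ?? 0) + 1;
  }
  return counts;
}
```

For a trace with one `ai.inference` span and two `ai.toolCall` spans, this returns `{ "ai.inference": 1, "ai.toolCall": 2 }`.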

Basic LLM Span Information

Each LLM span contains:

| Attribute | Description |
|---|---|
| ai.model.id | Model identifier |
| ai.model.provider | Model provider name |
| ai.usage.promptTokens | Number of prompt tokens |
| ai.usage.completionTokens | Number of completion tokens |
| ai.settings.maxRetries | Maximum retry attempts |
| ai.telemetry.functionId | Function identifier |
| ai.telemetry.metadata.* | Custom metadata |
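Since token counts live in span attributes, totals for a whole trace can be derived by summing the `ai.usage.*` values across its LLM spans. A sketch, again assuming a simplified span shape:

```typescript
// Assumed minimal span shape carrying the attributes listed above.
interface LlmSpan {
  attributes: Record<string, unknown>;
}

interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// Sum ai.usage.* attributes across a trace's LLM spans.
// Spans missing a usage attribute contribute zero.
function totalTokenUsage(spans: LlmSpan[]): TokenUsage {
  const total: TokenUsage = { promptTokens: 0, completionTokens: 0 };
  for (const span of spans) {
    total.promptTokens += Number(span.attributes["ai.usage.promptTokens"] ?? 0);
    total.completionTokens += Number(span.attributes["ai.usage.completionTokens"] ?? 0);
  }
  return total;
}
```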

Viewing Traces

Traces can be viewed in the Puzzlet dashboard under the “Traces” tab. Each trace shows:

  • Complete prompt execution timeline
  • Tool calls and their durations
  • Token usage and costs
  • Custom metadata and attributes
  • Error information (if any)

Best Practices

  1. Use meaningful function IDs for easy filtering
  2. Add relevant metadata for debugging context
  3. Monitor token usage and costs regularly
  4. Enable tracing in production environments
  5. Use the dashboard’s filtering capabilities to debug specific issues
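Points 1 and 2 are easiest to enforce with a small helper that builds the telemetry options consistently. A sketch using the options shape from the earlier example (the `feature.action` naming convention and `buildTelemetry` helper are suggestions, not part of the SDK):

```typescript
// Telemetry options as passed to runInference in the example above.
interface TelemetryOptions {
  isEnabled: boolean;
  functionId: string;
  metadata?: Record<string, string>;
}

// Build consistent telemetry options: a namespaced functionId keeps
// dashboard filtering predictable, and shared metadata keys aid debugging.
function buildTelemetry(
  feature: string,
  action: string,
  metadata: Record<string, string> = {}
): TelemetryOptions {
  return {
    isEnabled: true,
    functionId: `${feature}.${action}`, // e.g. "checkout.summarize-cart"
    metadata,
  };
}

// Usage:
// const result = await runInference(Prompt, props, {
//   telemetry: buildTelemetry("checkout", "summarize-cart", { userId: "123" }),
// });
```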
