Puzzlet collects telemetry using OpenTelemetry, providing a vendor-agnostic way to gather distributed traces and metrics for your prompt executions.

Enabling Tracing

Enable tracing in your Puzzlet client:

import { Puzzlet } from "@puzzlet/sdk";
import { createTemplateRunner } from "@puzzlet/agentmark";
const puzzletClient = new Puzzlet({
  apiKey: process.env.PUZZLET_API_KEY!,
  appId: process.env.PUZZLET_APP_ID!
}, createTemplateRunner);

// Initialize tracing
const tracer = puzzletClient.initTracing();

const myPrompt = await puzzletClient.fetchPrompt("my-prompt.prompt.mdx");

// Run your inference
const result = await myPrompt.run(props, { 
  telemetry: { 
    isEnabled: true,
    functionId: "my-function",
    metadata: {
      userId: "123",
    }
  } 
});

// Shutdown tracer (only needed for short-running scripts, local testing)
await tracer.shutdown();

Grouping Traces

You can group related work into a single trace using the trace function. This creates a new trace whose name is the string you pass as the first argument.

import { trace } from "@puzzlet/sdk";
...
trace('my-trace', async () => {
  // Your code here
});

You can create sub-groups using the component function. This creates a new sub-group within the parent trace.

import { component } from "@puzzlet/sdk";
...
trace('my-trace', async () => {
  await component('my-component', async () => {
    // Your code here
  });
});
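To picture how the nesting behaves, here is a minimal stand-in sketch (illustrative logic only, not the real SDK): each group starts a span, runs its callback inside it, and closes the span when the callback settles, so inner groups accumulate their parent chain.

```typescript
// Stand-in for trace/component: records each span with its parent chain.
const activePath: string[] = [];
const recordedSpans: string[] = [];

async function startSpan<T>(name: string, fn: () => Promise<T>): Promise<T> {
  activePath.push(name);
  recordedSpans.push(activePath.join(" > ")); // span name plus its ancestors
  try {
    return await fn();
  } finally {
    activePath.pop(); // close the span when the callback settles
  }
}

// Mirrors trace('my-trace', ...) wrapping component('my-component', ...)
await startSpan("my-trace", async () => {
  await startSpan("my-component", async () => {
    // Your code here
  });
});

console.log(recordedSpans); // ["my-trace", "my-trace > my-component"]
```

Note that the inner call is awaited; if it were not, the child span could outlive its parent group.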

Collected Spans

Puzzlet records the following OpenTelemetry spans:

ai.inference
  Full length of the inference call.
  Attributes: operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason

ai.toolCall
  Individual tool executions.
  Attributes: operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result

ai.stream
  Streaming response data.
  Attributes: ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond
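The streaming attributes are related by simple arithmetic: the average completion rate is presumably the completion token count divided by the time to finish. A sketch of that relationship (the exact formula is an assumption, not taken from the SDK):

```typescript
// Assumed relationship between the streaming span attributes:
// avgCompletionTokensPerSecond ≈ completionTokens / (msToFinish / 1000)
function avgCompletionTokensPerSecond(
  completionTokens: number,
  msToFinish: number
): number {
  return completionTokens / (msToFinish / 1000);
}

// e.g. 200 completion tokens streamed over 4000 ms
console.log(avgCompletionTokensPerSecond(200, 4000)); // 50 tokens/second
```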

Basic LLM Span Information

Each LLM span contains:

Attribute                    Description
ai.model.id                  Model identifier
ai.model.provider            Model provider name
ai.usage.promptTokens        Number of prompt tokens
ai.usage.completionTokens    Number of completion tokens
ai.settings.maxRetries       Maximum retry attempts
ai.telemetry.functionId      Function identifier
ai.telemetry.metadata.*      Custom metadata
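The metadata you pass at run time surfaces under ai.telemetry.metadata.*, with each key becoming its own span attribute. A sketch of that mapping (the flattening logic here is an assumption about how the attribute names are derived, not the SDK's actual code):

```typescript
// Illustrative: maps { userId: "123" } to { "ai.telemetry.metadata.userId": "123" }
function toTelemetryAttributes(
  metadata: Record<string, string>
): Record<string, string> {
  const attributes: Record<string, string> = {};
  for (const [key, value] of Object.entries(metadata)) {
    attributes[`ai.telemetry.metadata.${key}`] = value;
  }
  return attributes;
}

console.log(toTelemetryAttributes({ userId: "123", plan: "pro" }));
// one attribute per metadata key, e.g. "ai.telemetry.metadata.userId"
```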

Viewing Traces

Traces can be viewed in the Puzzlet dashboard under the “Traces” tab. Each trace shows:

  • Complete prompt execution timeline
  • Tool calls and their durations
  • Token usage and costs
  • Custom metadata and attributes
  • Error information (if any)

Best Practices

  1. Use meaningful function IDs for easy filtering
  2. Add relevant metadata for debugging context
  3. Monitor token usage and costs regularly
  4. Enable tracing in production environments
  5. Use the dashboard’s filtering capabilities to debug specific issues
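For practice 4, one approach is to compute isEnabled from the environment so tracing stays on in production without cluttering local runs. The helper name and environment-variable convention below are illustrative, not part of the Puzzlet SDK:

```typescript
// Illustrative helper: enable telemetry in production, or when explicitly forced.
function telemetryEnabled(env: Record<string, string | undefined>): boolean {
  return env.NODE_ENV === "production" || env.FORCE_TELEMETRY === "1";
}

// Usage with a prompt run:
// await myPrompt.run(props, {
//   telemetry: {
//     isEnabled: telemetryEnabled(process.env),
//     functionId: "my-function",
//   },
// });

console.log(telemetryEnabled({ NODE_ENV: "production" })); // true
console.log(telemetryEnabled({ NODE_ENV: "development" })); // false
```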
