The runInference function is the primary way to execute prompts and get responses from language models in AgentMark.

Basic Usage

```typescript
import { runInference, load } from "@puzzlet/agentmark";

// Load prompt from file
const Prompt = await load('./example.prompt.mdx');

// Run inference with props
const result = await runInference(Prompt, {
  name: "Alice",
  items: ["apple", "banana"]
});
```

API Reference

runInference(prompt, props?, options?)

Executes a prompt with the given props and returns the model’s response.

Parameters

| Parameter | Type | Description | Required |
|---|---|---|---|
| `prompt` | `AgentMarkPrompt` | The prompt to execute (loaded from an `.mdx` file) | Yes |
| `props` | `Record<string, any>` | Props to pass to the prompt | No |
| `options` | `InferenceOptions` | Optional configuration | No |

InferenceOptions

| Property | Type | Description | Required |
|---|---|---|---|
| `apiKey` | `string` | Override the API key for the request | No |
| `telemetry` | `TelemetrySettings` | Telemetry data configuration | No |

TelemetrySettings

| Property | Type | Description | Required |
|---|---|---|---|
| `isEnabled` | `boolean` | Whether telemetry is enabled | No |
| `functionId` | `string` | Identifier for the function | No |
| `metadata` | `Record<string, any>` | Additional metadata | No |

Returns: AgentMarkOutput

| Property | Type | Description |
|---|---|---|
| `result.text` | `string` | Text response from the model |
| `result.object` | `Record<string, any>` | Structured output if a schema is used |
| `tools` | `Array<{name: string, input: Record<string, any>, output?: Record<string, any>}>` | Tool calls made during inference |
| `toolResponses` | `GenerateTextResult<any, never>['toolResults']` | Results from tool executions |
| `usage.promptTokens` | `number` | Number of tokens in the prompt |
| `usage.completionTokens` | `number` | Number of tokens in the completion |
| `usage.totalTokens` | `number` | Total tokens used |
| `finishReason` | `"stop" \| "length" \| "content-filter" \| "tool-calls" \| "error" \| "other" \| "unknown"` | Why the model stopped generating |
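For illustration, the fields above can be modeled as a local type and inspected after a call. This is a sketch based only on the table; the actual `AgentMarkOutput` type is exported by the library and may differ in detail:

```typescript
// Sketch of the output shape described in the table above; the real
// AgentMarkOutput type comes from @puzzlet/agentmark.
type FinishReason =
  | "stop" | "length" | "content-filter" | "tool-calls"
  | "error" | "other" | "unknown";

interface OutputSketch {
  result: { text: string; object?: Record<string, any> };
  usage: { promptTokens: number; completionTokens: number; totalTokens: number };
  finishReason: FinishReason;
}

// Summarize a response for logging: anything other than "stop" or
// "tool-calls" is treated as an incomplete generation.
function summarizeOutput(output: OutputSketch): string {
  const ok = output.finishReason === "stop" || output.finishReason === "tool-calls";
  return `${ok ? "complete" : "incomplete"}: ${output.usage.totalTokens} tokens`;
}
```

Checking `finishReason` before trusting `result.text` catches truncated (`"length"`) and filtered (`"content-filter"`) generations early.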

Examples

Basic Text Response

```typescript
const response = await runInference(Prompt, {
  userName: "Alice"
});
console.log(response.result.text); // "Hello Alice!"
```

Structured Output

Note: To use structured output, you must define a schema in the prompt.

```typescript
const response = await runInference(Prompt, {
  sentence: "Alice and Bob went to lunch"
});
console.log(response.result.object); // { names: ["Alice", "Bob"] }
```

With Custom Settings

```typescript
await runInference(Prompt, props, {
  apiKey: "sk-1234567890",
  telemetry: {
    isEnabled: true,
    functionId: "example-function",
    metadata: {
      userId: "123"
    }
  }
});
```

Error Handling

The function throws errors for:

- Invalid prompt format
- Model API errors
- Schema validation failures

```typescript
try {
  const response = await runInference(Prompt);
} catch (error) {
  console.error("Inference failed:", error);
}
```
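If you prefer not to let failures propagate, the try/catch pattern above can be wrapped in a small generic helper. The helper name is illustrative and not part of the AgentMark API:

```typescript
// Illustrative helper: run any async inference call, log a failure,
// and return null instead of throwing.
async function tryInference<T>(run: () => Promise<T>): Promise<T | null> {
  try {
    return await run();
  } catch (error) {
    console.error("Inference failed:", error);
    return null;
  }
}

// Usage: const response = await tryInference(() => runInference(Prompt));
```

Returning `null` keeps calling code linear, at the cost of losing the error type; rethrow instead if callers need to distinguish schema failures from API errors.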

Best Practices

  1. Always handle potential errors
  2. Use appropriate types for props
  3. Configure telemetry appropriately for production use
  4. Keep API keys secure and use environment variables
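For the last point, a common pattern is to fail fast when a key is missing rather than passing `undefined` to the model provider. The helper and environment variable names below are illustrative:

```typescript
// Illustrative helper: read a required secret from the environment,
// throwing a clear error if it is not set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage with runInference (the variable name is an example):
// await runInference(Prompt, props, { apiKey: requireEnv("OPENAI_API_KEY") });
```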

Have Questions?

We’re here to help! Choose the best way to reach us: