Prompts must be loaded from a file before you can use them. Once loaded, a prompt exposes just a few functions:

  1. run - Execute the prompt and get a response from the model.
  2. stream - Stream the model's response as it is generated (see the sketch after this list).
  3. compile - Compile the prompt into a JSON object, with props applied.
  4. deserialize - Deserialize the prompt into the appropriate provider's format (e.g. OpenAI, Anthropic, Ollama).
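
For example, once a prompt is loaded (see Load Prompts below), stream can be consumed incrementally. The loop below is a hedged sketch: it assumes stream resolves to an async iterable of text chunks, which may not match the actual return shape.

TypeScript
// Hedged sketch: assumes `stream` yields text chunks as an async iterable.
// Consult the streaming documentation for the actual return type.
const chunks = await examplePrompt.stream({ name: "Alice" });
for await (const chunk of chunks) {
  process.stdout.write(String(chunk));
}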

Load Prompts

The load function is the primary way to load prompts from files in AgentMark. The FileLoader accepts a base path and a runner function.

TypeScript
import { FileLoader, createTemplateRunner } from "@puzzlet/agentmark";

// Point the loader at the directory containing your prompt files.
const fileLoader = new FileLoader('./path/to/prompts', createTemplateRunner);
// Load a prompt file, resolved relative to the base path.
const examplePrompt = await fileLoader.load('./example.prompt.mdx');

Run Inference

TypeScript
// Run inference with props
const result = await examplePrompt.run({
  name: "Alice",
  items: ["apple", "banana"]
});

run(props?, options?)

Executes a prompt with the given props and returns the model’s response.

Parameters

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| props | Record<string, any> | Props to pass to the prompt | No |
| options | InferenceOptions | Optional configuration | No |

InferenceOptions

| Property | Type | Description | Required |
| --- | --- | --- | --- |
| apiKey | string | Override the API key for the request | No |
| telemetry | TelemetrySettings | Telemetry data configuration | No |

TelemetrySettings

| Property | Type | Description | Required |
| --- | --- | --- | --- |
| isEnabled | boolean | Whether telemetry is enabled | No |
| functionId | string | Identifier for the function | No |
| metadata | Record<string, any> | Additional metadata | No |

Returns: AgentMarkOutput

| Property | Type | Description |
| --- | --- | --- |
| result | any | The result of the inference. This can be text, or an object if a schema is provided. |
| version | string | The version of the prompt output. Used internally. |
| tools | Array<{name: string, input: Record<string, any>, output?: Record<string, any>}> | Tool calls made during inference |
| toolResponses | GenerateTextResult<any, never>['toolResults'] | Results from tool executions |
| usage.promptTokens | number | Number of tokens in the prompt |
| usage.completionTokens | number | Number of tokens in the completion |
| usage.totalTokens | number | Total tokens used |
| finishReason | "stop" \| "length" \| "content-filter" \| "tool-calls" \| "error" \| "other" \| "unknown" | Why the model stopped generating |

See Inspecting the Output below for an example that reads these fields.

Examples

Basic Text Response

TypeScript
const response = await examplePrompt.run({
  userName: "Alice"
});
console.log(response.result); // "Hello Alice!"

Structured Output

Note: To use structured output, you must define a schema in the prompt.

TypeScript
const response = await examplePrompt.run({
  sentence: "Alice and Bob went to lunch"
});
console.log(response.result); // { names: ["Alice", "Bob"] }

With Custom Settings

TypeScript
await examplePrompt.run({}, {
  apiKey: "sk-1234567890",
  telemetry: {
    isEnabled: true,
    functionId: "example-function",
    metadata: {
      userId: "123"
    }
  }
});
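
Inspecting the Output

Beyond result, the returned AgentMarkOutput carries token usage and the finish reason (see the Returns table above):

TypeScript
const response = await examplePrompt.run({ userName: "Alice" });
console.log(response.usage.totalTokens); // total tokens used for the call
console.log(response.finishReason);      // e.g. "stop"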

Error Handling

The function throws errors for:

  • Invalid prompt format
  • Model API errors
  • Schema validation failures

TypeScript
try {
  const response = await examplePrompt.run();
} catch (error) {
  console.error("Inference failed:", error);
}

Compile

TypeScript
const compiledPrompt = await examplePrompt.compile({
  userName: "Alice"
});
console.log(compiledPrompt); // { name: "example-prompt", messages: [...], metadata: {...} }

compile(props?)

Compiles the prompt into a JSON object, with props applied.

Parameters

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| props | Record<string, any> | Props to pass to the prompt | No |

Returns: Record<string, any>

| Property | Type | Description |
| --- | --- | --- |
| name | string | Model name from the prompt |
| messages | Array<{role: string, content: string}> | Messages to send to the model |
| metadata | Record<string, any> | Metadata from the prompt |

Examples

TypeScript
const compiledPrompt = await examplePrompt.compile({
  userName: "Alice"
});
console.log(compiledPrompt.name);     // "example-prompt"
console.log(compiledPrompt.messages); // [{ role: ..., content: ... }, ...]

Deserialize

TypeScript
const result = await examplePrompt.deserialize({ name: "Alice" });

deserialize(props?)

Deserializes the prompt into the appropriate provider's format (e.g. OpenAI, Anthropic, Ollama).

Parameters

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| props | Record<string, any> | Props to pass to the prompt | No |

Returns: CompletionParams

The deserialized prompt in the format expected by the provider.

Examples

TypeScript
const result = await examplePrompt.deserialize({ name: "Alice" });
console.log(result); 
// {
//   model: "gpt-4-turbo-preview",
//   temperature: 0.7,
//   messages: [
//     { role: "system", content: "You are a helpful assistant." },
//     { role: "user", content: "Hi, I'm Alice!" },
//   ]
// }

Best Practices

  1. Always handle potential errors (see Error Handling above).
  2. Use appropriate types for props.
  3. Configure telemetry appropriately for production use.
  4. Keep API keys secure and load them from environment variables, as in the sketch below.
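
A minimal sketch of the last point, assuming a generic environment variable name:

TypeScript
// The variable name here is illustrative; use whatever your deployment defines.
await examplePrompt.run({}, { apiKey: process.env.MODEL_API_KEY });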
