AgentMark uses a multi-stage transformation pipeline to convert your .prompt.mdx files into model-specific formats while maintaining a unified interface.

Processing Pipeline

1. MDX Processing

Your .prompt.mdx file contains the following (see the example after this list):

  • Frontmatter configuration
  • Message components (<System>, <User>, <Assistant>)
  • Dynamic content (props, components)
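
For example, a prompt file combining all three might look roughly like this. The frontmatter keys shown (model name and settings) are illustrative rather than the exact configuration schema; the message components and the {props.name} expression follow the MDX conventions described above.

MDX
---
name: example
model:
  name: gpt-4
  settings:
    temperature: 0.7
---

<System>You are a friendly assistant.</System>
<User>Say hello to {props.name}.</User>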

2. AST Generation

The load() function parses your MDX into an Abstract Syntax Tree (AST). This step (see the usage sketch after this list):

  • Validates syntax
  • Processes imports
  • Returns an AST that the SDK can use to generate prompts
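
A minimal sketch of this step, assuming load() rejects the returned promise when the file fails validation (the pipeline description above only states that validation happens at this stage):

TypeScript
import { load } from "@puzzlet/agentmark";

try {
  // Parse the MDX file: syntax is validated and imports are resolved here.
  const ast = await load("./example.prompt.mdx");
  // `ast` is the tree that runInference() consumes in the next stage.
} catch (err) {
  // Assumption: validation failures surface as a rejected promise.
  console.error("Prompt failed to load:", err);
}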

3. Transformations

Two transformations occur:

3a. AgentMark Format

runInference() first converts the AST into AgentMark’s unified format. This stage (illustrated after the list below):

  • Standardizes the message structure
  • Normalizes model settings
  • Evaluates props against the values you pass in
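
Purely as an indication of what this stage produces (the unified format’s real field names are internal to the SDK and may differ), the prompt file sketched earlier, called with props { name: "Alice" }, would come out roughly as:

TypeScript
// Indicative only: standardized messages, normalized settings, evaluated props.
const unified = {
  model: { name: "gpt-4", settings: { temperature: 0.7 } },
  messages: [
    { role: "system", content: "You are a friendly assistant." },
    { role: "user", content: "Say hello to Alice." },
  ],
};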

3b. Model Plugin Translation

Model plugins (such as the OpenAI and Anthropic plugins) then convert the AgentMark format into each provider’s own request format, handling (see the sketch after this list):

  • Message formatting
  • Parameter mapping
  • Tool/function-calling adaptations
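
Conceptually, a plugin is a mapping layer from the unified shape to a provider request. The sketch below reuses the indicative shape from the previous example and targets OpenAI-style chat-completions parameters; it is not the actual plugin interface.

TypeScript
// Illustrative sketch of the translation a model plugin performs.
type UnifiedPrompt = {
  model: { name: string; settings: { temperature?: number } };
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

function toOpenAIChatRequest(unified: UnifiedPrompt) {
  return {
    model: unified.model.name,                       // parameter mapping
    temperature: unified.model.settings.temperature, // normalized setting passed through
    messages: unified.messages,                      // message formatting
    // tool/function definitions would be adapted to the provider here
  };
}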

4. Unified Response

The model response is converted back into AgentMark’s unified format, providing (an indicative shape follows the list):

  • A consistent structure across models
  • Standardized error handling
  • A unified streaming interface
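
Again only as an indication (these are not the SDK’s actual types), the normalized result is what lets calling code stay provider-agnostic:

TypeScript
// Hypothetical shape: whatever the provider returned is normalized so callers
// never branch on the vendor.
interface UnifiedResult {
  text: string;                 // consistent structure across models
  error?: { message: string };  // standardized error handling
}
// Streaming is exposed through the same unified surface, so switching the
// registered model does not change how results are consumed.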

Implementation Example

TypeScript
import { runInference, load, ModelPluginRegistry } from "@puzzlet/agentmark";
import OpenAIChatPlugin from "@puzzlet/openai";

// Register model plugin
ModelPluginRegistry.register(new OpenAIChatPlugin(), ["gpt-4"]);

// Load and process prompt
const prompt = await load("./example.prompt.mdx");
const result = await runInference(prompt, { name: "Alice" });
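
In this example, load() performs the AST generation step, runInference() applies the transformations described above, the registered OpenAIChatPlugin handles the gpt-4-specific translation, and result comes back already normalized into the unified format.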

Key Benefits

  1. Abstraction: Developers work with a single, unified format
  2. Portability: Prompts can be easily switched between models
  3. Extensibility: New model support via plugin system
  4. Type Safety: Full type-safety support throughout the pipeline
  5. Tooling: The unified format enables a rich ecosystem of development tools

Have Questions?

We’re here to help! Choose the best way to reach us: