# Core API

Execute prompts and get responses from language models.
Prompts must be loaded from a file before you can run them. Once loaded, a prompt exposes just a few functions:

- `run`: Execute the prompt and get a response from the model.
- `stream`: Stream the prompt and get a response from the model.
- `compile`: Compile the prompt into a JSON object, with props applied.
- `deserialize`: Deserialize the prompt into the appropriate provider's format (e.g. OpenAI, Anthropic, Ollama).
## Load Prompts
The `load` function is the primary way to load prompts from files in AgentMark. The `FileLoader` accepts a base path and a runner function.
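A minimal loading sketch. The import path, runner signature, and prompt file name below are assumptions for illustration; check your AgentMark installation for the exact exports.

```typescript
import { FileLoader, load } from "@puzzlet/agentmark"; // assumed import path

// The runner executes compiled prompts against your model provider.
// Its exact signature depends on your setup (placeholder here).
declare const runner: (...args: any[]) => Promise<any>;

// FileLoader takes a base path and a runner function.
const loader = new FileLoader("./prompts", runner);

// Load a prompt file relative to the base path (file name is illustrative).
const prompt = await load("greeting.prompt.mdx", loader);
```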
## Run Inference

`run(props?, options?)`
Executes a prompt with the given props and returns the model’s response.
### Parameters

| Parameter | Type | Description | Required |
|---|---|---|---|
| `props` | `Record<string, any>` | Props to pass to the prompt | No |
| `options` | `InferenceOptions` | Optional configuration | No |
### InferenceOptions

| Property | Type | Description | Required |
|---|---|---|---|
| `apiKey` | `string` | Override the API key for the request | No |
| `telemetry` | `TelemetrySettings` | Telemetry data configuration | No |
### TelemetrySettings

| Property | Type | Description | Required |
|---|---|---|---|
| `isEnabled` | `boolean` | Whether telemetry is enabled | No |
| `functionId` | `string` | Identifier for the function | No |
| `metadata` | `Record<string, any>` | Additional metadata | No |
### Returns: `AgentMarkOutput`

| Property | Type | Description |
|---|---|---|
| `result` | `any` | The result of the inference: text, or an object if the prompt provides a schema |
| `version` | `string` | The version of the prompt output. Used internally |
| `tools` | `Array<{name: string, input: Record<string, any>, output?: Record<string, any>}>` | Tool calls made during inference |
| `toolResponses` | `GenerateTextResult<any, never>['toolResults']` | Results from tool executions |
| `usage.promptTokens` | `number` | Number of tokens in the prompt |
| `usage.completionTokens` | `number` | Number of tokens in the completion |
| `usage.totalTokens` | `number` | Total tokens used |
| `finishReason` | `"stop" \| "length" \| "content-filter" \| "tool-calls" \| "error" \| "other" \| "unknown"` | Why the model stopped generating |
### Examples

#### Basic Text Response
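A minimal sketch of a plain-text run, reusing the `prompt` loaded above; the props are illustrative.

```typescript
// Run the prompt with props and read the plain-text result.
const output = await prompt.run({ name: "Ada" });

console.log(output.result);            // the model's text response
console.log(output.finishReason);      // e.g. "stop"
console.log(output.usage.totalTokens); // total tokens used
```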
#### Structured Output

Note: To use structured output, you must define a `schema` in the prompt.
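A sketch under the assumption that the prompt file declares a schema, in which case `result` comes back as a parsed object rather than text. The prop and result shapes here are illustrative.

```typescript
// With a schema defined in the prompt, result is a structured object.
const output = await prompt.run({ bio: "Ada Lovelace, born 1815" });

// The shape follows the schema declared in the prompt file
// (this particular shape is just an example).
const user = output.result as { name: string; birthYear: number };
console.log(user.name, user.birthYear);
```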
#### With Custom Settings
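A sketch passing the documented `InferenceOptions` as the second argument; the environment variable name, function ID, and metadata values are placeholders.

```typescript
// Override the API key per request and attach telemetry metadata.
const output = await prompt.run(
  { name: "Ada" },
  {
    apiKey: process.env.MODEL_API_KEY, // placeholder env var name
    telemetry: {
      isEnabled: true,
      functionId: "greeting-prompt",
      metadata: { env: "production" },
    },
  }
);
```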
### Error Handling
The function throws errors for:
- Invalid prompt format
- Model API errors
- Schema validation failures
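Since all three failure modes surface as thrown errors, a standard try/catch around `run` covers them. A minimal sketch:

```typescript
try {
  const output = await prompt.run({ name: "Ada" });
  console.log(output.result);
} catch (error) {
  // Invalid prompt format, model API errors, and schema validation
  // failures all land here.
  console.error("Inference failed:", error);
}
```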
## Compile

`compile(props?)`
Compiles the prompt into a JSON object, with props applied.
### Parameters

| Parameter | Type | Description | Required |
|---|---|---|---|
| `props` | `Record<string, any>` | Props to pass to the prompt | No |
### Returns: `Record<string, any>`

| Property | Type | Description |
|---|---|---|
| `name` | `string` | Model name from the prompt |
| `messages` | `Array<{role: string, content: string}>` | Messages to send to the model |
| `metadata` | `Record<string, any>` | Metadata from the prompt |
### Examples
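A minimal sketch reading the documented fields of the compiled object; no model call is made. Whether `compile` returns a promise may vary by version, so it is awaited defensively here.

```typescript
// Apply props and get the prompt as a plain JSON object.
const compiled = await prompt.compile({ name: "Ada" });

console.log(compiled.name);     // model name from the prompt
console.log(compiled.messages); // [{ role: "user", content: "..." }, ...]
console.log(compiled.metadata); // metadata from the prompt
```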
## Deserialize

`deserialize(props?)`
Deserializes the prompt into the appropriate provider's format (e.g. OpenAI, Anthropic, Ollama).
### Parameters

| Parameter | Type | Description | Required |
|---|---|---|---|
| `props` | `Record<string, any>` | Props to pass to the prompt | No |
### Returns: `CompletionParams`
The deserialized prompt in the format expected by the provider.
### Examples
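A sketch assuming an OpenAI-backed prompt; the returned `CompletionParams` shape depends on the provider configured in the prompt file.

```typescript
// Produce the provider-native request body for the prompt.
const params = await prompt.deserialize({ name: "Ada" });

// For an OpenAI-backed prompt this matches the OpenAI chat completion
// request shape; Anthropic, Ollama, and others get their own formats.
console.log(JSON.stringify(params, null, 2));
```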
## Best Practices
- Always handle potential errors
- Use appropriate types for props
- Configure telemetry appropriately for production use
- Keep API keys secure and use environment variables
## Have Questions?

We're here to help! Choose the best way to reach us:

- Join our Discord community for quick answers and discussions
- Email us at hello@puzzlet.ai for support
- Schedule an Enterprise Demo to learn about our business solutions