# Run Inference

Execute prompts and get responses from language models.

The `runInference` function is the primary way to execute prompts and get responses from language models in AgentMark.
## Basic Usage
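A minimal sketch of a basic call. The package path and the `loadPrompt` helper are assumptions, not confirmed AgentMark exports; substitute whatever your installation provides for loading an `.mdx` prompt.

```typescript
// Hypothetical import path and loader — adjust to your AgentMark setup.
import { loadPrompt, runInference } from "@agentmark/agentmark";

async function main() {
  // Load a prompt from an .mdx file (illustrative file name).
  const prompt = await loadPrompt("prompts/greeting.mdx");

  // Execute it with props; props fill the template variables in the prompt.
  const output = await runInference(prompt, { userName: "Ada" });

  console.log(output.result.text);
}

main();
```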
## API Reference

### `runInference(prompt, props?, options?)`

Executes a prompt with the given props and returns the model’s response.

#### Parameters

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| `prompt` | `AgentMarkPrompt` | The prompt to execute (loaded from an `.mdx` file) | Yes |
| `props` | `Record<string, any>` | Props to pass to the prompt | No |
| `options` | `InferenceOptions` | Optional configuration | No |
#### InferenceOptions

| Property | Type | Description | Required |
| --- | --- | --- | --- |
| `apiKey` | `string` | Override the API key for the request | No |
| `telemetry` | `TelemetrySettings` | Telemetry data configuration | No |
#### TelemetrySettings

| Property | Type | Description | Required |
| --- | --- | --- | --- |
| `isEnabled` | `boolean` | Whether telemetry is enabled | No |
| `functionId` | `string` | Identifier for the function | No |
| `metadata` | `Record<string, any>` | Additional metadata | No |
#### Returns: `AgentMarkOutput`

| Property | Type | Description |
| --- | --- | --- |
| `result.text` | `string` | Text response from the model |
| `result.object` | `Record<string, any>` | Structured output when a schema is used |
| `tools` | `Array<{name: string, input: Record<string, any>, output?: Record<string, any>}>` | Tool calls made during inference |
| `toolResponses` | `GenerateTextResult<any, never>['toolResults']` | Results from tool executions |
| `usage.promptTokens` | `number` | Number of tokens in the prompt |
| `usage.completionTokens` | `number` | Number of tokens in the completion |
| `usage.totalTokens` | `number` | Total tokens used |
| `finishReason` | `"stop" \| "length" \| "content-filter" \| "tool-calls" \| "error" \| "other" \| "unknown"` | Why the model stopped generating |
## Examples

### Basic Text Response
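A sketch of a plain-text call, reading the response from `result.text` and the token counts from `usage` as documented above. The import path, `loadPrompt` helper, and prompt file name are assumptions:

```typescript
// Hypothetical import path and loader — adjust to your AgentMark setup.
import { loadPrompt, runInference } from "@agentmark/agentmark";

async function main() {
  const prompt = await loadPrompt("prompts/summarize.mdx"); // illustrative file
  const output = await runInference(prompt, { article: "…long text…" });

  // Plain-text responses land on result.text.
  console.log(output.result.text);
  console.log(`Used ${output.usage.totalTokens} tokens`);
}

main();
```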
### Structured Output

Note: To use structured output, you must define a `schema` in the prompt.
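A sketch of reading structured output, assuming the prompt file declares a schema; with a schema defined, the parsed result appears on `result.object` rather than `result.text`. The import path, loader, prompt file, and the shape of the extracted object are all illustrative:

```typescript
// Hypothetical import path and loader — adjust to your AgentMark setup.
import { loadPrompt, runInference } from "@agentmark/agentmark";

async function main() {
  // This prompt is assumed to define a schema (e.g. name + birthYear fields).
  const prompt = await loadPrompt("prompts/extract-user.mdx");
  const output = await runInference(prompt, { bio: "Ada Lovelace, born 1815" });

  // result.object is Record<string, any>; narrow it to the schema's shape.
  const user = output.result.object as { name: string; birthYear: number };
  console.log(user.name, user.birthYear);
}

main();
```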
### With Custom Settings
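A sketch of passing the third `options` argument, using the `InferenceOptions` and `TelemetrySettings` fields documented above. The import path, loader, environment variable name, and metadata values are assumptions:

```typescript
// Hypothetical import path and loader — adjust to your AgentMark setup.
import { loadPrompt, runInference } from "@agentmark/agentmark";

async function main() {
  const prompt = await loadPrompt("prompts/greeting.mdx");

  const output = await runInference(
    prompt,
    { userName: "Ada" },
    {
      // Override the API key instead of relying on environment defaults.
      apiKey: process.env.MODEL_API_KEY,
      telemetry: {
        isEnabled: true,
        functionId: "greeting-v1", // illustrative identifier
        metadata: { environment: "staging" },
      },
    }
  );

  console.log(output.finishReason);
}

main();
```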
## Error Handling

The function throws errors for:

- Invalid prompt format
- Model API errors
- Schema validation failures
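One way to wrap the call is shown below. To keep the sketch self-contained and runnable, it uses an illustrative stand-in for `runInference` that always rejects; in real code you would import the function from AgentMark instead.

```typescript
// Illustrative stand-in: always rejects, so the try/catch pattern below can
// run without a live model. Replace with the real runInference import.
async function runInference(promptPath: string): Promise<{ result: { text: string } }> {
  throw new Error(`Model API error while running ${promptPath}`);
}

async function safeInference(promptPath: string): Promise<string> {
  try {
    const output = await runInference(promptPath);
    return output.result.text;
  } catch (err) {
    // Narrow unknown to Error before reading .message.
    if (err instanceof Error) {
      console.error("Inference failed:", err.message);
      return ""; // fall back to an empty response
    }
    throw err; // re-throw anything unexpected
  }
}

async function main() {
  const text = await safeInference("prompts/greeting.mdx");
  console.log(text === "" ? "fell back to empty string" : text);
}

main();
```

Whether to swallow the error with a fallback or re-throw depends on the caller; schema validation failures in particular usually indicate a prompt bug and are worth surfacing.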
## Best Practices
- Always handle potential errors
- Use appropriate types for props
- Configure telemetry appropriately for production use
- Keep API keys secure and use environment variables
## Have Questions?

We’re here to help! Choose the best way to reach us:

- Join our Discord community for quick answers and discussions
- Email us at hello@puzzlet.ai for support
- Schedule an Enterprise Demo to learn about our business solutions