
Quickstart

Get up and running with Attest in 5 minutes.

Install the Python package with uv:

uv add attest-ai

Or with pip:

pip install attest-ai

For TypeScript, install with npm:

npm install @attest-ai/core

Or with pnpm:

pnpm add @attest-ai/core

Start with a simple agent that uses OpenAI:

Python:

from openai import OpenAI

client = OpenAI()

def my_agent(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

TypeScript:

import OpenAI from "openai";

const client = new OpenAI();

async function myAgent(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: question }]
  });
  return response.choices[0].message.content || "";
}

Test the agent with Attest assertions:

Python:

from attest import expect

# Run the agent
result = my_agent("What is 2 + 2?")

# Assert the output
(expect(result)
    .output_contains("4")
    .cost_under(0.05))

TypeScript:

import { expect } from "@attest-ai/core";

const result = await myAgent("What is 2 + 2?");

expect(result)
  .output_contains("4")
  .cost_under(0.05);
Run the test script:

python test_agent.py

Or for TypeScript:

node test_agent.js

If all assertions pass, you’ll see:

✓ All assertions passed

If an assertion fails, you’ll see details:

✗ Assertion failed: output_contains("goodbye")
Expected output to contain: goodbye
Actual output: hello world

Attest uses a fluent API where you chain assertions. Each assertion validates a different aspect:

Layer       | Example                                  | What it checks
Schema      | .matches_schema({"type": "object"})      | Output structure
Constraints | .cost_under(0.10), .latency_under(2000)  | Performance metrics
Trace       | .trace_contains_model("gpt-4o-mini")     | Execution path
Content     | .output_contains("hello")                | Text content
Embedding   | .semantically_similar_to("greeting")     | Semantic meaning
LLM Judge   | .passes_judge("Is this polite?")         | Domain-specific eval
Trace Tree  | .trace_tree_valid()                      | Structure of full trace
Simulation  | .simulation_passes()                     | Multi-agent scenario
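The chaining above works because each assertion method checks one property of the result and then returns the same expectation object. A minimal sketch of the pattern in plain Python (the `AgentResult` fields and method names here are illustrative, not Attest's actual internals):

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    output: str          # what the agent returned
    cost: float          # dollars spent on the run
    latency_ms: float    # wall-clock time of the run

class Expectation:
    """Toy fluent assertion chain: every check returns self."""

    def __init__(self, result: AgentResult):
        self.result = result

    def output_contains(self, text: str) -> "Expectation":
        assert text in self.result.output, (
            f"Expected output to contain: {text}\n"
            f"Actual output: {self.result.output}"
        )
        return self

    def cost_under(self, dollars: float) -> "Expectation":
        assert self.result.cost < dollars, (
            f"cost {self.result.cost} is not under {dollars}"
        )
        return self

    def latency_under(self, ms: float) -> "Expectation":
        assert self.result.latency_ms < ms, (
            f"latency {self.result.latency_ms}ms is not under {ms}ms"
        )
        return self

def expect(result: AgentResult) -> Expectation:
    return Expectation(result)

# Chained checks on a hand-built result
r = AgentResult(output="2 + 2 = 4", cost=0.002, latency_ms=850)
expect(r).output_contains("4").cost_under(0.05).latency_under(3000)
```

Because each method returns `self`, the first failing assertion raises and stops the chain, which is why a failure report shows exactly one assertion.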

Chain multiple assertions on one result:

(expect(result)
    .output_contains("success")
    .cost_under(0.05)
    .latency_under(3000)
    .passes_judge("Is the response helpful?"))

Continue testing after failures to see all issues:

from attest import expect, soft_fail

with soft_fail():
    (expect(result)
        .output_contains("hello")   # May fail
        .cost_under(0.01)           # May fail
        .passes_judge("..."))       # Will still run
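The idea behind soft failure is to record assertion failures instead of raising on the first one, then report them all when the block ends. A minimal sketch of how such a context manager could work, using only the standard library (`soft_expect` and `contains` are invented names for illustration, not Attest's API):

```python
from contextlib import contextmanager

class SoftExpectation:
    """Toy soft-assertion chain: failures are recorded, not raised."""

    def __init__(self, value: str, failures: list):
        self.value = value
        self.failures = failures

    def contains(self, text: str) -> "SoftExpectation":
        if text not in self.value:
            self.failures.append(f"expected {text!r} in {self.value!r}")
        return self  # keep going so later checks still run

@contextmanager
def soft_fail():
    failures: list = []
    # Hand the caller a factory bound to this block's failure list
    yield lambda value: SoftExpectation(value, failures)
    # Runs when the with-block exits: report everything at once
    if failures:
        raise AssertionError("; ".join(failures))

# All three checks run; the two failures are reported together at the end
try:
    with soft_fail() as soft_expect:
        soft_expect("hello world").contains("hello").contains("goodbye").contains("farewell")
except AssertionError as e:
    print(e)
```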

Use built-in adapters for framework-specific features:

from attest.adapters import langchain  # importing the adapter enables trace capture

# Agent built with LangChain
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent

agent = create_react_agent(...)
result = agent.invoke({"input": "..."})

# Attest auto-captures the trace
expect(result).output_contains("...").trace_contains_tool("google_search")
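Trace assertions like trace_contains_tool work because the adapter records each step of the run as it happens. A minimal sketch of the concept (the trace record format here is invented for illustration; Attest's real trace structure may differ):

```python
# Hypothetical trace: a list of step records captured during one agent run
trace = [
    {"type": "model_call", "model": "gpt-4o-mini"},
    {"type": "tool_call", "tool": "google_search"},
    {"type": "model_call", "model": "gpt-4o-mini"},
]

def trace_contains_tool(steps: list, name: str) -> bool:
    """True if any recorded step called the named tool."""
    return any(
        step.get("type") == "tool_call" and step.get("tool") == name
        for step in steps
    )

assert trace_contains_tool(trace, "google_search")
assert not trace_contains_tool(trace, "calculator")
```

Checking the recorded steps rather than the final output lets a test verify the execution path, not just the answer.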

“API key not found”

Set your provider’s API key as an environment variable:

export OPENAI_API_KEY="sk-..."

“Assertion failed but I expected it to pass”

Check the actual output:

print(result.output)  # See what the agent actually returned
print(result.cost)    # Check what the run cost
print(result.trace)   # Inspect the execution trace

“Tests running slow”

Use local models with Ollama to reduce latency:

from attest.adapters import ollama

# ollama_agent is your agent wired to a local Ollama model,
# so the test makes no remote API calls
result = ollama_agent("What is 2 + 2?")

Check the API reference for complete method documentation and the migration guides if you’re upgrading from another framework.