# Quickstart

Get up and running with Attest in 5 minutes.
## Installation

### Python

```sh
uv add attest-ai
```

Or with pip:

```sh
pip install attest-ai
```

### Node.js

```sh
npm install @attest-ai/core
```

Or with pnpm:

```sh
pnpm add @attest-ai/core
```

## Write Your First Test

### Step 1: Create an Agent

Start with a simple agent that uses OpenAI:
#### Python

```python
from openai import OpenAI

client = OpenAI()

def my_agent(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

#### TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI();

async function myAgent(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: question }],
  });
  return response.choices[0].message.content || "";
}
```

### Step 2: Add Assertions

Test the agent with Attest assertions:
#### Python

```python
from attest import expect

# Run the agent
result = my_agent("What is 2 + 2?")

# Assert the output
(expect(result)
    .output_contains("4")
    .cost_under(0.05))
```

#### TypeScript

```typescript
import { expect } from "@attest-ai/core";

const result = await myAgent("What is 2 + 2?");

expect(result)
  .output_contains("4")
  .cost_under(0.05);
```

### Step 3: Run Tests
#### Python

```sh
python test_agent.py
```

#### TypeScript

```sh
node test_agent.js
```

If all assertions pass, you’ll see:

```
✓ All assertions passed
```

If an assertion fails, you’ll see details:

```
✗ Assertion failed: output_contains("goodbye")
  Expected output to contain: goodbye
  Actual output: hello world
```

## Understanding Assertions

Attest uses a fluent API where you chain assertions. Each assertion validates a different aspect:
| Layer | Example | What it checks |
| --- | --- | --- |
| Schema | `.matches_schema({"type": "object"})` | Output structure |
| Constraints | `.cost_under(0.10)`, `.latency_under(2000)` | Performance metrics |
| Trace | `.trace_contains_model("gpt-4o-mini")` | Execution path |
| Content | `.output_contains("hello")` | Text content |
| Embedding | `.semantically_similar_to("greeting")` | Semantic meaning |
| LLM Judge | `.passes_judge("Is this polite?")` | Domain-specific eval |
| Trace Tree | `.trace_tree_valid()` | Structure of full trace |
| Simulation | `.simulation_passes()` | Multi-agent scenario |
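The fluent style works because each assertion method returns the expectation object itself, so the next call in the chain operates on the same result. As a rough mental model (an illustrative sketch, not Attest's actual implementation — the `Expectation` class and its fields here are made up for demonstration):

```python
# Minimal sketch of a chainable assertion object. Names and fields
# are illustrative only; Attest's real internals may differ.
class Expectation:
    def __init__(self, output: str, cost: float):
        self.output = output
        self.cost = cost

    def output_contains(self, text: str) -> "Expectation":
        assert text in self.output, f"Expected output to contain: {text}"
        return self  # returning self is what enables chaining

    def cost_under(self, limit: float) -> "Expectation":
        assert self.cost < limit, f"Cost {self.cost} is not under {limit}"
        return self

# Each call validates one aspect, then hands back the same object.
(Expectation("The answer is 4", cost=0.001)
    .output_contains("4")
    .cost_under(0.05))
```

Because every method hands back the same object, the chain reads top to bottom as a list of properties the result must satisfy, and it stops at the first failing assertion.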
## Next Steps

- Python SDK Reference — Explore all assertion methods
- Adapters — Learn about provider integration
- Migration Guides — Upgrade from other frameworks
- Write an Adapter — Build a custom integration
## Common Patterns

### Multiple Assertions

Chain multiple assertions on one result:

```python
(expect(result)
    .output_contains("success")
    .cost_under(0.05)
    .latency_under(3000)
    .passes_judge("Is the response helpful?"))
```

### Soft Failures
Continue testing after failures to see all issues:

```python
from attest import expect, soft_fail

with soft_fail():
    (expect(result)
        .output_contains("hello")  # May fail
        .cost_under(0.01)          # May fail
        .passes_judge("..."))      # Will still run

### Adapter Integration
Use built-in adapters for framework-specific features:

```python
from attest import expect
from attest.adapters import langchain

# Agent built with LangChain
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent

agent = create_react_agent(...)
result = agent.invoke({"input": "..."})

# Attest auto-captures the trace
expect(result).output_contains("...").trace_contains_tool("google_search")
```

## Troubleshooting
### “API key not found”

Set your provider’s API key as an environment variable:

```sh
export OPENAI_API_KEY="sk-..."
```

### “Assertion failed but I expected it to pass”

Check the actual output:

```python
print(result.output)  # See what the agent actually returned
print(result.cost)    # Check cost and latency
print(result.trace)   # Inspect execution trace
```

### “Tests running slow”
Use local models with Ollama to reduce latency:

```python
from attest.adapters import ollama

# Runs locally, no API calls
result = ollama_agent("What is 2 + 2?")
```

## Learn More

Check the API reference for complete method documentation and the migration guides if you’re upgrading from another framework.