# Python Setup

## Installation

Install the attest-ai package:
```sh
uv add attest-ai
```

Or with pip:
```sh
pip install attest-ai
```

For framework-specific adapters, install extras:
```sh
uv add 'attest-ai[langchain]'   # LangChain support
uv add 'attest-ai[llamaindex]'  # LlamaIndex support
uv add 'attest-ai[google-adk]'  # Google ADK support
uv add 'attest-ai[otel]'        # OpenTelemetry support
```

## Create an Agent
Start with a simple agent that uses OpenAI:
```python
from openai import OpenAI

client = OpenAI()

def my_agent(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

## Write Your First Test
Create a file `test_agent.py`:
```python
from attest import expect

# Run the agent
result = my_agent("What is 2 + 2?")

# Assert the output
(expect(result)
    .output_contains("4")
    .cost_under(0.05))
```

Chain multiple assertions on one result:
```python
(expect(result)
    .output_contains("success")
    .cost_under(0.05)
    .latency_under(3000)
    .passes_judge("Is the response helpful?"))
```

## Run Tests
```sh
python test_agent.py
```

If all assertions pass:
```text
✓ All assertions passed
```

If an assertion fails:
```text
✗ Assertion failed: output_contains("goodbye")
  Expected output to contain: goodbye
  Actual output: hello world
```

## With pytest
Attest includes a pytest plugin. Use the `attest` fixture:
```python
import pytest
from attest import expect

def test_math_question(attest):
    result = my_agent("What is 2 + 2?")

    chain = (
        expect(result)
        .output_contains("4")
        .cost_under(0.05)
    )
    attest.evaluate(chain)
```

Run with:
```sh
pytest test_agent.py -v
```

## Soft Failures
Continue testing after failures to see all issues:
```python
from attest import expect, soft_fail

with soft_fail():
    (expect(result)
        .output_contains("hello")     # May fail
        .cost_under(0.01)             # May fail
        .passes_judge("..."))         # Will still run
```
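Conceptually, soft failure amounts to recording failed checks instead of raising on the first one, so every assertion in the chain still runs. A minimal illustration of the pattern in plain Python (the `run_soft` helper is hypothetical, not Attest's implementation):

```python
def run_soft(checks):
    """Evaluate (condition, label) pairs; return labels of the checks that failed."""
    return [label for ok, label in checks if not ok]

# Two checks: one passes, one fails; both are still evaluated.
failed = run_soft([
    ("4" in "2 + 2 = 4", "output_contains('4')"),
    (0.10 < 0.05, "cost_under(0.05)"),
])
print(failed)  # ['cost_under(0.05)']
```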
## Using Adapters

Use built-in adapters for framework-specific trace capture:
```python
from openai import OpenAI

from attest import OpenAIAdapter, AgentResult, expect

client = OpenAI()
adapter = OpenAIAdapter(agent_id="assistant")

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Explain recursion"}],
)

trace = adapter.trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "Explain recursion"}],
)

result = AgentResult(trace=trace)
expect(result).output_contains("recursion").cost_under(0.10)
```

## Simulation Mode
Run tests without real API calls:

```sh
ATTEST_SIMULATION=1 pytest test_agent.py -v
```

## Environment Variables
Section titled “Environment Variables”| Variable | Purpose |
|---|---|
OPENAI_API_KEY | OpenAI API key for judge/embedding assertions |
ANTHROPIC_API_KEY | Anthropic API key for judge assertions |
ATTEST_ENGINE_PATH | Override engine binary location |
ATTEST_SIMULATION | Enable simulation mode (no real LLM calls) |
ATTEST_JUDGE_PROVIDER | LLM judge provider: openai, anthropic, gemini, ollama |
ATTEST_JUDGE_MODEL | Model for judge assertions (e.g., gpt-4.1) |
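The variables above can be exported for a shell session or set inline for a single run. For example, a session configuring the judge and then running the suite in simulation mode (using only the variables documented in the table):

```sh
# Configure the LLM judge for this shell session
export ATTEST_JUDGE_PROVIDER=openai
export ATTEST_JUDGE_MODEL=gpt-4.1

# Run the suite once in simulation mode (no real LLM calls)
ATTEST_SIMULATION=1 pytest test_agent.py -v
```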
## Next Steps

- Expect DSL Reference — All assertion methods
- Adapters Reference — Provider integrations
- Simulation Guide — Simulation runtime details
- Multi-Agent Guide — Testing multi-agent systems