# Python Adapters
Integrations with LLM providers, frameworks, and observability platforms.
## Overview

Adapters provide automatic instrumentation and integration with external services. Attest includes 11 built-in adapters covering the most common use cases.
## LLM Providers

### OpenAI

Auto-instrument OpenAI API calls with cost and latency tracking.

```python
from attest import expect
from attest.adapters import openai

# The adapter automatically patches the OpenAI client
client = openai.create_client()

result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "..."}],
)

# The result includes trace, cost, and latency
expect(result).output_contains("...").cost_under(0.05)
```

Supported models:

- gpt-4.1
- gpt-4.1-mini
- gpt-4o
- gpt-4o-mini
### Anthropic

Test Claude models with automatic instrumentation.

```python
from attest import expect
from attest.adapters import anthropic

client = anthropic.create_client()

result = client.messages.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "..."}],
    max_tokens=1024,
)

expect(result).output_contains("...").trace_contains_model("claude-sonnet-4-20250514")
```

Supported models:

- claude-opus-4-20250514
- claude-sonnet-4-20250514
- claude-haiku-4-5-20251001
### Google Gemini

Test Google Gemini with cost tracking.

```python
from attest import expect
from attest.adapters import gemini

client = gemini.create_client()

result = client.generate_content(
    model="gemini-2.0-flash",
    contents="...",
)

expect(result).cost_under(0.01)
```

### Ollama

Test local models running on Ollama.

```python
from attest import expect
from attest.adapters import ollama

# Connects to the local Ollama instance (localhost:11434)
result = ollama.generate(
    model="mistral",
    prompt="...",
)

# No remote API calls are made, so latency covers only local inference
expect(result).latency_under(5000)
```

## Framework Adapters
### LangChain

Auto-instrument LangChain agents and chains.

```python
from attest import expect
from attest.adapters import langchain
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent

# Set up the agent
llm = ChatOpenAI(model="gpt-4o-mini")
tools = [...]
agent = create_react_agent(llm, tools)

# Attest automatically captures the trace
result = agent.invoke({"input": "question"})

expect(result).trace_contains_tool("google_search").cost_under(0.10)
```

Instruments:
- Chains
- Agents (ReAct, etc.)
- Tool usage
- Model calls
- Token costs
### CrewAI

Test multi-agent systems built with CrewAI.

```python
from attest import expect
from attest.adapters import crewai
from crewai import Agent, Task, Crew

agent = Agent(role="Research", goal="Find information", llm=...)
task = Task(description="Research...", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff()

# The trace includes all agent interactions
expect(result).trace_tree_valid().all_agents_passed()
```

### LlamaIndex

Test query engines and indexing pipelines.

```python
from attest import expect
from attest.adapters import llamaindex
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(docs)
engine = index.as_query_engine()

result = engine.query("question")

expect(result).output_contains("...").latency_under(3000)
```

## Observability Adapters
### OpenTelemetry

Export Attest traces to OpenTelemetry for monitoring.

```python
from attest import expect
from attest.adapters import otel
from opentelemetry.exporter.jaeger.thrift import JaegerExporter

otel.setup_exporter(
    JaegerExporter(agent_host_name="localhost", agent_port=6831)
)

result = agent.run("...")

# The trace is automatically exported to Jaeger
expect(result).cost_under(0.05)
```

## Custom Adapters
Write custom adapters for unsupported frameworks.

```python
import attest
from attest.adapters import BaseAdapter

class MyFrameworkAdapter(BaseAdapter):
    def capture_trace(self, fn, args, kwargs):
        """Capture a trace from framework calls."""
        result = fn(*args, **kwargs)

        # Extract trace, cost, and latency from the framework result
        return {
            'output': result.text,
            'cost': calculate_cost(result),
            'latency_ms': result.duration,
            'trace': extract_trace(result),
        }

# Register the adapter
attest.register_adapter('my_framework', MyFrameworkAdapter())
```

See the Writing a Framework Adapter tutorial for a complete guide.
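To see the `capture_trace` contract in isolation, here is a self-contained sketch. `TimingAdapter` and everything in it are illustrative, not part of the Attest API: it wraps an ordinary function, measures wall-clock latency, and returns the same four-key dictionary shape.

```python
import time

class TimingAdapter:
    """Minimal stand-in for a BaseAdapter subclass: wraps a call,
    measures wall-clock latency, and returns the trace dict shape."""

    def capture_trace(self, fn, args, kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        latency_ms = (time.perf_counter() - start) * 1000

        return {
            'output': result,
            'cost': 0.0,  # local call: nothing to bill
            'latency_ms': latency_ms,
            'trace': [{'span': fn.__name__, 'args': args}],
        }

adapter = TimingAdapter()
captured = adapter.capture_trace(str.upper, ("hello",), {})
print(captured['output'])  # HELLO
```

A real adapter would replace the `trace` stub with spans extracted from the framework's own callback or event hooks.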
## Provider Comparison

| Provider | Cost Tracking | Latency | Trace | Free Tier |
|---|---|---|---|---|
| OpenAI | Yes | Yes | Yes | Yes |
| Anthropic | Yes | Yes | Yes | Yes |
| Gemini | Yes | Yes | Yes | Yes |
| Ollama | No | Yes | Yes | Yes |
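The cost-tracking column boils down to per-token arithmetic. A minimal sketch of the idea, using illustrative prices rather than any provider's actual rates:

```python
# Hypothetical per-million-token prices (input, output) in USD.
# Real rates change over time and should be looked up per provider.
PRICES = {
    "gpt-4o-mini": (0.15, 0.60),
    "claude-haiku": (0.25, 1.25),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Cost in USD: tokens / 1e6 * price-per-million, summed over both directions."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = estimate_cost("gpt-4o-mini", input_tokens=2_000, output_tokens=500)
print(f"${cost:.6f}")  # $0.000600
```

Ollama shows "No" in the cost column because local inference has no per-token price to apply.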
## Configuration

### Global Settings

```python
from attest import config

# Set the default provider
config.set_provider("openai")

# Set the default model for judges
config.set_model("gpt-4o-mini")

# Set the API timeout
config.set_timeout(30)

# Set the maximum number of retries
config.set_max_retries(3)
```

### Per-Adapter Settings

```python
from attest.adapters import openai

# Custom OpenAI configuration
openai.configure(
    api_key="sk-...",
    organization="org-...",
    timeout=60,
    max_retries=5,
)
```

### Environment Variables
Most adapters read from environment variables:

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORG_ID="org-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google
export GOOGLE_API_KEY="AIza..."

# Ollama
export OLLAMA_BASE_URL="http://localhost:11434"
```
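A conventional resolution order when both `configure(...)` and environment variables are in play is explicit argument, then environment variable, then default. Attest's exact precedence is not specified above, so the helper below is a sketch of the common pattern, not its documented behavior:

```python
import os

def resolve_api_key(explicit=None, env_var="OPENAI_API_KEY", default=None):
    """An explicit argument wins, then the environment, then the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["OPENAI_API_KEY"] = "sk-from-env"
print(resolve_api_key())                   # sk-from-env
print(resolve_api_key(explicit="sk-arg"))  # sk-arg
```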
## Error Handling

Adapters provide detailed error messages:

```python
from attest import expect, AdapterError

try:
    result = agent.run("...")
except AdapterError as e:
    print(f"Adapter error: {e.message}")
    print(f"Provider: {e.provider}")
    print(f"Suggestion: {e.suggestion}")
```
## Rate Limiting

Adapters handle rate limiting automatically:

```python
from attest.adapters import openai

# Automatic backoff and retry on rate-limit errors
result = openai.create_completion(...)  # Retries on rate limit

# Custom rate limiting
openai.configure(requests_per_minute=60)
```
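The automatic retry behavior amounts to exponential backoff: on a rate-limit error, wait, double the delay, and try again. A standalone sketch of the idea (not Attest's actual implementation; all names here are illustrative):

```python
import time

class RateLimitError(Exception):
    pass

def with_backoff(fn, max_retries=3, base_delay=0.01):
    """Retry fn on RateLimitError, doubling the delay after each attempt."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # budget exhausted: surface the error
            time.sleep(delay)
            delay *= 2

# Simulate a provider that rejects the first two calls
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky))  # ok
```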
## Related

- Python Expect DSL — Assertion methods
- Framework Adapters Guide — Adapter architecture and usage
- Writing a Framework Adapter — Build custom adapters