# Adapter System
Attest uses a two-tier adapter architecture to capture traces from both LLM provider SDKs and agent orchestration frameworks. All adapters produce the same canonical `Trace` format, making the assertion pipeline framework-agnostic.
## Architecture

```mermaid
flowchart TB
    subgraph TIER1["Tier 1: Provider Adapters"]
        direction LR
        OAI["OpenAI<br/>Adapter"]
        ANT["Anthropic<br/>Adapter"]
        GEM["Gemini<br/>Adapter"]
        OLL["Ollama<br/>Adapter"]
    end
    subgraph TIER2["Tier 2: Framework Adapters"]
        direction LR
        LC["LangChain<br/>Adapter"]
        ADK["Google ADK<br/>Adapter"]
        CR["CrewAI<br/>Adapter"]
        LI["LlamaIndex<br/>Adapter"]
    end
    subgraph SPECIAL["Special Adapters"]
        direction LR
        MAN["Manual<br/>Adapter"]
        OTEL["OTel<br/>Adapter"]
    end
    OAI --> CT["Canonical Trace"]
    ANT --> CT
    GEM --> CT
    OLL --> CT
    LC --> CT
    ADK --> CT
    CR --> CT
    LI --> CT
    MAN --> CT
    OTEL --> CT
    CT --> ENGINE["Attest Engine<br/>8-Layer Pipeline"]
```

The two tiers solve fundamentally different capture problems:
| | Provider Adapters (Tier 1) | Framework Adapters (Tier 2) |
|---|---|---|
| Captures | Individual LLM calls, token counts, tool use blocks | Tool call sequences, agent delegation trees, multi-step reasoning |
| Does not capture | Agent orchestration, multi-step reasoning, parallel execution | Raw LLM request/response details |
| Complexity | ~100 lines per adapter | ~200-400 lines per adapter |
| Hook mechanism | Wraps LLM client SDK | Hooks into framework event system |
| Temporal data | Wall-clock around LLM call | Framework event timestamps |
| Agent identity | Not populated (single-agent context) | Populated from framework metadata |
## Adapter Class Hierarchy

```mermaid
classDiagram
    class BaseAdapter {
        +str agent_id
        #_create_builder() TraceBuilder
        #_now_ms() int
        #_resolve_timestamps() tuple
    }
    class BaseProviderAdapter {
        <<abstract>>
        +trace_from_response() Trace
        #_extract_completion()* str
        #_extract_model()* str
        #_extract_total_tokens()* int
        #_extract_tool_calls()* list
        #_extract_input() dict
        #_build_output() dict
    }
    class OpenAIAdapter {
        +trace_from_response() Trace
    }
    class AnthropicAdapter {
        +trace_from_response() Trace
    }
    class GeminiAdapter {
        +trace_from_response() Trace
    }
    class OllamaAdapter {
        +trace_from_response() Trace
    }
    class LangChainAdapter {
        +instrument() InstrumentedAgent
        +extract_trace() Trace
    }
    class GoogleADKAdapter {
        +instrument() InstrumentedAgent
        +extract_trace() Trace
    }
    class CrewAIAdapter {
        +instrument() InstrumentedAgent
        +extract_trace() Trace
    }
    class LlamaIndexAdapter {
        +instrument() InstrumentedAgent
        +extract_trace() Trace
    }
    class ManualAdapter {
        +record_step()
        +record_delegation()
        +build_trace() Trace
    }
    class OTelAdapter {
        +from_spans() Trace
    }
    BaseAdapter <|-- BaseProviderAdapter
    BaseAdapter <|-- LangChainAdapter
    BaseAdapter <|-- GoogleADKAdapter
    BaseAdapter <|-- CrewAIAdapter
    BaseAdapter <|-- LlamaIndexAdapter
    BaseAdapter <|-- ManualAdapter
    BaseAdapter <|-- OTelAdapter
    BaseProviderAdapter <|-- OpenAIAdapter
    BaseProviderAdapter <|-- AnthropicAdapter
    BaseProviderAdapter <|-- GeminiAdapter
    BaseProviderAdapter <|-- OllamaAdapter
```

## Tier 1: Provider Adapters

Provider adapters wrap LLM client SDKs to capture individual API calls. They use the template method pattern: `BaseProviderAdapter` defines the algorithm skeleton in `trace_from_response()`, and each subclass overrides extraction methods for its provider's response format.
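The template method pattern described above can be sketched in plain Python. This is an illustrative miniature, not the real `BaseProviderAdapter`: class and field names here are invented, and the real class builds a `Trace` rather than a dict.

```python
from abc import ABC, abstractmethod


class ProviderAdapterSketch(ABC):
    """Illustrative template method: the skeleton is fixed, extraction hooks vary."""

    def trace_from_response(self, response) -> dict:
        # The fixed skeleton assembles a trace-like dict from
        # provider-specific pieces supplied by the subclass hooks.
        return {
            "model": self._extract_model(response),
            "completion": self._extract_completion(response),
            "total_tokens": self._extract_total_tokens(response),
        }

    @abstractmethod
    def _extract_completion(self, response) -> str: ...

    @abstractmethod
    def _extract_model(self, response): ...

    @abstractmethod
    def _extract_total_tokens(self, response): ...


class FakeProviderAdapter(ProviderAdapterSketch):
    """A subclass only overrides the extraction hooks for its response shape."""

    def _extract_completion(self, response) -> str:
        return response["text"]

    def _extract_model(self, response):
        return response["model"]

    def _extract_total_tokens(self, response):
        return response.get("tokens")


adapter = FakeProviderAdapter()
trace = adapter.trace_from_response({"text": "4", "model": "fake-1", "tokens": 12})
# trace == {"model": "fake-1", "completion": "4", "total_tokens": 12}
```

The key property is that the skeleton never changes per provider; only the three (in Attest, four) extraction hooks do.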
### BaseProviderAdapter

The base class handles:
- Timestamp resolution (wall-clock fallbacks)
- TraceBuilder setup with agent ID
- Input extraction
- LLM call step construction (model, completion text, token count)
- Tool call extraction
- Output and metadata assembly
Subclasses override four abstract methods:
| Method | Returns | Purpose |
|---|---|---|
| `_extract_completion(response)` | `str` | Completion text from provider response |
| `_extract_model(response)` | `str \| None` | Model identifier |
| `_extract_total_tokens(response)` | `int \| None` | Total token count |
| `_extract_tool_calls(response)` | `list[dict]` | Tool calls as `[{"name": ..., "args": ...}]` |
Two optional overrides customize input/output handling:
| Method | Default Behavior |
|---|---|
| `_extract_input(input_messages)` | Wraps in `{"messages": ...}` |
| `_build_output(response, completion_text)` | Returns `{"message": completion_text}` |
### Usage Pattern

```python
import time

from attest import OpenAIAdapter
from openai import OpenAI

client = OpenAI()
adapter = OpenAIAdapter()

# Capture timestamps around the API call
started = int(time.time() * 1000)
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What is 2+2?"}],
)
ended = int(time.time() * 1000)

# Build trace from response
trace = adapter.trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "What is 2+2?"}],
    started_at_ms=started,
    ended_at_ms=ended,
)
```

### Available Provider Adapters
| Adapter | Provider | Import |
|---|---|---|
| `OpenAIAdapter` | OpenAI (and compatible: Azure, Together AI, vLLM) | `from attest import OpenAIAdapter` |
| `AnthropicAdapter` | Anthropic Claude | `from attest import AnthropicAdapter` |
| `GeminiAdapter` | Google Gemini | `from attest import GeminiAdapter` |
| `OllamaAdapter` | Ollama (local models) | `from attest import OllamaAdapter` |
The `OpenAIAdapter` works with any OpenAI-compatible endpoint by configuring the client's `base_url`.
## Tier 2: Framework Adapters

Framework adapters integrate with agent orchestration frameworks to capture the full execution structure. They hook into framework-specific event systems: callbacks, event streams, or workflow interceptors.
### Interface

Framework adapters implement `instrument()` to wrap an agent/executor and `extract_trace()` to produce the canonical trace:

```python
class FrameworkAdapter:
    @property
    def framework_name(self) -> str:
        """Framework identifier: 'langchain', 'google-adk', etc."""
        ...

    @property
    def supports_parallel(self) -> bool:
        """Whether this framework can express parallel agent execution."""
        return False

    def instrument(self, agent_or_executor) -> InstrumentedAgent:
        """Wrap agent to capture traces while preserving its API."""
        ...

    def extract_trace(self) -> Trace:
        """Extract captured trace in Attest canonical format."""
        ...
```

The `InstrumentedAgent` wrapper delegates all attribute access to the underlying agent via `__getattr__`, so framework-specific methods continue to work.
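The delegation mechanism can be shown with a minimal sketch. This is a hypothetical stand-in for `InstrumentedAgent`, not its real implementation:

```python
class InstrumentedAgentSketch:
    """Illustrative wrapper: forward everything it doesn't define itself."""

    def __init__(self, agent):
        self._agent = agent

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup on the wrapper fails,
        # so framework-specific attributes fall through to the wrapped agent.
        return getattr(self._agent, name)


class FakeAgent:
    name = "fake"

    def plan(self, goal):
        return f"plan for {goal}"


wrapped = InstrumentedAgentSketch(FakeAgent())
wrapped.name          # "fake" — forwarded attribute
wrapped.plan("ship")  # "plan for ship" — forwarded method call
```

Because forwarding happens per attribute lookup, the wrapper stays valid even for framework methods it has never heard of.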
### Framework Capture Strategies

Each framework exposes different hook points:
#### LangChain

- Hook: `BaseCallbackHandler` injected into `AgentExecutor.invoke(config={"callbacks": [handler]})`
- Parallel support: No (`AgentExecutor` is sequential)
- Notes: The "agents-as-tools" pattern requires tracking callback nesting depth. Tool calls containing LLM calls unfold into `agent_call` steps with sub-traces.

```python
from attest import LangChainAdapter

adapter = LangChainAdapter()
instrumented = adapter.instrument(my_agent_executor)
result = await instrumented.invoke({"input": "Handle this issue"})
trace = adapter.extract_trace()
```

#### Google ADK
- Hook: Runner event stream (`session.run()` yields events with agent boundaries)
- Parallel support: Yes (`ParallelAgent`)
- Notes: `SequentialAgent` produces ordered steps. `ParallelAgent` produces concurrent steps with overlapping temporal intervals. `sub_agents` map to trace tree nesting.

```python
from attest import GoogleADKAdapter

adapter = GoogleADKAdapter()
instrumented = adapter.instrument(my_adk_agent)
result = await instrumented.run("Analyze this incident")
trace = adapter.extract_trace()
```

#### LlamaIndex
- Hook: `AgentWorkflow` event listener for agent transitions
- Parallel support: No (`AgentWorkflow` manages handoffs sequentially)
- Notes: Workflow events provide agent transition boundaries. Each `FunctionAgent` within the workflow becomes a trace tree node.

```python
from attest import LlamaIndexInstrumentationHandler

handler = LlamaIndexInstrumentationHandler()
# Attach to LlamaIndex instrumentation system
```

#### CrewAI
Section titled “CrewAI”- Hook: Task execution callbacks + agent step callbacks
- Parallel support: Partial (concurrent tasks within a crew)
- Notes: The Crew maps to the root trace. Each Agent becomes a sub-trace. Task delegation maps to `agent_call` steps.

```python
from attest import CrewAIAdapter

adapter = CrewAIAdapter()
instrumented = adapter.instrument(my_crew)
result = await instrumented.kickoff()
trace = adapter.extract_trace()
```

### Temporal Data Population
Framework adapters populate temporal fields that provider adapters do not:
| Field | Provider Adapter | Framework Adapter |
|---|---|---|
| `started_at_ms` | Wall-clock around LLM call | Framework event timestamp |
| `ended_at_ms` | Wall-clock around LLM call | Framework event timestamp |
| `agent_id` | Not populated | From framework agent metadata |
| `agent_role` | Not populated | From framework agent description |
These fields enable Layer 3 temporal assertions (`agent_ordered_before`, `agents_overlap`, `ordered_agents`) and Layer 7 trace tree analysis.
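An overlap check over these timestamp fields reduces to interval arithmetic. The sketch below assumes half-open `[start, end)` intervals in milliseconds; the exact boundary semantics of Attest's `agents_overlap` may differ:

```python
def intervals_overlap(a_start_ms: int, a_end_ms: int,
                      b_start_ms: int, b_end_ms: int) -> bool:
    """Two half-open intervals overlap iff each starts before the other ends."""
    return a_start_ms < b_end_ms and b_start_ms < a_end_ms


# Sequential handoff: the first agent ends exactly when the second starts
intervals_overlap(0, 100, 100, 200)  # False — no overlap

# Parallel agents with overlapping execution windows
intervals_overlap(0, 150, 100, 200)  # True
```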
## Special Adapters

### Manual Adapter

For custom agent implementations that do not use a supported framework. The developer manually records steps and builds the trace:
```python
from attest import ManualAdapter

adapter = ManualAdapter(agent_id="my-agent")
builder = adapter._create_builder()

builder.set_input(messages=[{"role": "user", "content": "Hello"}])
builder.add_llm_call("completion", args={"model": "gpt-4.1"}, result={"completion": "Hi there"})
builder.add_tool_call(name="search", args={"query": "weather"})
builder.set_output(message="The weather is sunny.")

trace = builder.build()
```

### OTel Adapter
Converts OpenTelemetry spans into Attest traces. Use this to test agents instrumented with any OTel-compatible observability platform (Langfuse, Arize, etc.):

```python
from attest import OTelAdapter

adapter = OTelAdapter()
trace = adapter.from_spans(otel_spans)
```

The OTel adapter maps:

- Span `startTimeUnixNano`/`endTimeUnixNano` to `started_at_ms`/`ended_at_ms`
- Span attribute `gen_ai.agent.name` to `agent_id`
- Span name to step name
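The mapping above amounts to renaming fields and converting OTel's nanosecond timestamps to milliseconds. A sketch over a dict-shaped span (the output field names are illustrative, not Attest's actual schema):

```python
def span_to_step(span: dict) -> dict:
    """Map an OTel-style span dict to trace-step fields (illustrative)."""
    return {
        "agent_id": span.get("attributes", {}).get("gen_ai.agent.name"),
        "name": span["name"],
        # OTel timestamps are nanoseconds since the epoch; traces use milliseconds
        "started_at_ms": span["startTimeUnixNano"] // 1_000_000,
        "ended_at_ms": span["endTimeUnixNano"] // 1_000_000,
    }


step = span_to_step({
    "name": "llm_call",
    "attributes": {"gen_ai.agent.name": "planner"},
    "startTimeUnixNano": 1_700_000_000_123_000_000,
    "endTimeUnixNano": 1_700_000_001_456_000_000,
})
# step["started_at_ms"] == 1_700_000_000_123
```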
## Combining Adapters

A framework adapter and a provider adapter can be used together. The framework adapter captures orchestration structure while the provider adapter captures detailed LLM call data:
```python
# Framework adapter captures orchestration structure
lc_adapter = LangChainAdapter()
instrumented = lc_adapter.instrument(my_executor)

# Provider adapter captures detailed LLM call data
oai_adapter = OpenAIAdapter()
wrapped_client = oai_adapter.wrap(my_openai_client)

# Execute with both adapters active
result = await instrumented.run("Handle this issue")

# Merge: framework trace provides structure, provider trace provides LLM detail
trace = attest.merge_traces(
    lc_adapter.extract_trace(),
    oai_adapter.extract_trace(),
)
```

The `merge_traces` utility matches LLM call steps by temporal overlap and enriches framework-level steps with provider-level detail (full prompt text, token breakdown, model parameters).
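Matching by temporal overlap can be illustrated with a toy pairing function. This is a sketch of the idea only; the real `merge_traces` operates on `Trace` objects, and the step shapes and tie-breaking rules here are invented:

```python
def match_by_overlap(framework_steps: list[dict],
                     provider_steps: list[dict]) -> list[tuple]:
    """Pair each framework-level LLM step with the first provider step
    whose time window overlaps it (first match wins in this sketch)."""
    pairs = []
    for fstep in framework_steps:
        for pstep in provider_steps:
            if fstep["start"] < pstep["end"] and pstep["start"] < fstep["end"]:
                pairs.append((fstep["name"], pstep["model"]))
                break
    return pairs


framework = [{"name": "llm_call_1", "start": 10, "end": 50}]
provider = [{"model": "gpt-4.1", "start": 12, "end": 48}]
match_by_overlap(framework, provider)  # [("llm_call_1", "gpt-4.1")]
```

Temporal matching is what lets the two traces stay decoupled: neither adapter needs to know the other exists at capture time.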
## Implementing a Custom Adapter

### Provider Adapter

To add support for a new LLM provider, subclass `BaseProviderAdapter` and implement four methods:
```python
from attest.adapters._base import BaseProviderAdapter

class MyProviderAdapter(BaseProviderAdapter):
    def _extract_completion(self, response) -> str:
        return response.text

    def _extract_model(self, response, **metadata) -> str | None:
        return response.model_id

    def _extract_total_tokens(self, response) -> int | None:
        return response.usage.total if response.usage else None

    def _extract_tool_calls(self, response) -> list[dict]:
        return [
            {"name": tc.function.name, "args": tc.function.arguments}
            for tc in (response.tool_calls or [])
        ]
```

Then use it:

```python
adapter = MyProviderAdapter()
trace = adapter.trace_from_response(response, input_messages=messages)
```

### Framework Adapter
Framework adapters are more involved. The key steps:

- Subclass `BaseAdapter`
- Implement `instrument()` to hook into the framework's event system
- Collect events into a `TraceBuilder` during execution
- Implement `extract_trace()` to return the built trace

The LangChain adapter serves as a reference implementation (~300 lines).
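The collect-then-build shape of those steps can be sketched against an imaginary callback-based framework. Everything here is hypothetical: `ToyFrameworkAdapter`, `ToyAgent`, and the `on_event` hook stand in for a real framework's event system and for Attest's `TraceBuilder`:

```python
class ToyFrameworkAdapter:
    """Hypothetical adapter: collect events during execution,
    then assemble them into a trace-shaped dict afterwards."""

    def __init__(self):
        self._events = []

    def instrument(self, agent):
        # Hook step: the toy framework calls on_event for every action,
        # so pointing it at our list records the execution.
        agent.on_event = self._events.append
        return agent

    def extract_trace(self) -> dict:
        # Build step: turn collected events into ordered trace steps.
        return {"steps": [{"type": e["type"], "name": e["name"]} for e in self._events]}


class ToyAgent:
    on_event = staticmethod(lambda e: None)  # default: events are dropped

    def run(self, task):
        self.on_event({"type": "tool_call", "name": "search"})
        self.on_event({"type": "llm_call", "name": "completion"})
        return "done"


adapter = ToyFrameworkAdapter()
agent = adapter.instrument(ToyAgent())
agent.run("demo")
trace = adapter.extract_trace()
# trace["steps"] lists the tool_call then the llm_call, in execution order
```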
## Adapter Inventory

| Adapter | Tier | Captures | Install Extra |
|---|---|---|---|
| `OpenAIAdapter` | Provider | LLM calls, tokens, tool use | `attest-ai[openai]` |
| `AnthropicAdapter` | Provider | LLM calls, tokens, tool use | `attest-ai[anthropic]` |
| `GeminiAdapter` | Provider | LLM calls, tokens, tool use | `attest-ai[gemini]` |
| `OllamaAdapter` | Provider | LLM calls, tokens, tool use | `attest-ai[ollama]` |
| `LangChainAdapter` | Framework | Tools, agents, delegation | `attest-ai[langchain]` |
| `GoogleADKAdapter` | Framework | Tools, agents, parallel groups | `attest-ai[google-adk]` |
| `CrewAIAdapter` | Framework | Tasks, agents, delegation | `attest-ai[crewai]` |
| `LlamaIndexInstrumentationHandler` | Framework | Workflows, agent transitions | `attest-ai[llamaindex]` |
| `ManualAdapter` | Special | Developer-defined steps | (included in base) |
| `OTelAdapter` | Special | OTel spans | `attest-ai[otel]` |