Framework Adapters

Attest adapters capture LLM interactions as Trace objects. Two adapter tiers exist, backed by a shared class hierarchy.

BaseAdapter                          <- shared utilities
├── BaseProviderAdapter              <- template method for single LLM calls
│   ├── OpenAIAdapter
│   ├── AnthropicAdapter
│   ├── GeminiAdapter
│   └── OllamaAdapter
├── GoogleADKAdapter                 <- framework: ADK event stream
├── LangChainCallbackHandler         <- framework: LangChain callbacks
├── LlamaIndexInstrumentationHandler <- framework: LlamaIndex dispatcher
├── OTelAdapter                      <- framework: OpenTelemetry spans
└── ManualAdapter                    <- utility: builder function

Provider adapters capture a single LLM request/response cycle. They inherit BaseProviderAdapter and override 4 extraction methods.

Framework adapters capture a complete agent run (multiple LLM calls, tool calls, sub-agent delegations). They inherit BaseAdapter for shared utilities but implement their own capture mechanics.

ProviderAdapter — one LLM call → one Trace
FrameworkAdapter — one agent run (N LLM calls, M tool calls) → one Trace

OpenAI:

from openai import OpenAI

from attest import OpenAIAdapter, AgentResult, expect

client = OpenAI()
adapter = OpenAIAdapter(agent_id="assistant")

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Explain recursion"}],
)

trace = adapter.trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "Explain recursion"}],
)

result = AgentResult(trace=trace)
expect(result).output_contains("recursion").cost_under(0.10)

Anthropic:

from anthropic import Anthropic

from attest import AnthropicAdapter, AgentResult, expect

client = Anthropic()
adapter = AnthropicAdapter(agent_id="claude-agent")

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain recursion"}],
)

trace = adapter.trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "Explain recursion"}],
)

Gemini:

from attest import GeminiAdapter

adapter = GeminiAdapter(agent_id="gemini-agent")

trace = adapter.trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "Explain recursion"}],
    model="gemini-2.0-flash",  # Gemini model passed via metadata
)

Ollama:

from attest import OllamaAdapter

adapter = OllamaAdapter(agent_id="local-model")

# Ollama returns a dict, not an object
trace = adapter.trace_from_response(
    response_dict,
    input_messages=[{"role": "user", "content": "Hello"}],
)

LangChain

Install the optional extra:

uv add 'attest-ai[langchain]'

Context manager usage:

from attest import LangChainAdapter, AgentResult, expect

adapter = LangChainAdapter(agent_id="research-agent")

with adapter.capture() as handler:
    result = agent.invoke(
        {"input": "Summarize recent AI research"},
        config={"callbacks": [handler]},
    )

trace = adapter.trace
expect(AgentResult(trace=trace)).tool_called("search_web")

Direct handler usage:

from attest import LangChainCallbackHandler
handler = LangChainCallbackHandler(agent_id="my-agent")
result = agent.invoke(input_data, config={"callbacks": [handler]})
trace = handler.build_trace()

build_trace() raises RuntimeError if called more than once.
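A minimal sketch of that single-build contract (the recovery branch is illustrative):

trace = handler.build_trace()  # first call returns the Trace
try:
    handler.build_trace()      # a second call raises RuntimeError
except RuntimeError:
    pass  # reuse the trace built by the first call instead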

Callback mapping:

Callback                          Captured as
on_chain_start                    Agent input
on_chain_end                      Agent output
on_chat_model_start / on_llm_end  llm_call step with token counts
on_tool_start / on_tool_end       tool_call step with args and result
on_tool_error                     tool_call step with error field

Google ADK

Install the optional extra:

uv add 'attest-ai[google-adk]'

Async capture:

from attest import GoogleADKAdapter, AgentResult, expect

adapter = GoogleADKAdapter(agent_id="root-agent")

trace = await adapter.capture_async(
    runner=runner,
    user_id="user-123",
    session_id="session-abc",
    message="What is the weather in Paris?",
)

expect(AgentResult(trace=trace)).tool_called("get_weather")

From pre-collected events:

trace = GoogleADKAdapter.from_events(
    events=collected_events,
    agent_id="root-agent",
    input_message="What is the weather in Paris?",
)

Event mapping:

ADK event field                             Captured as
actions.tool_calls                          tool_call steps (args)
actions.tool_results                        tool_call steps (result)
actions.transfer_to_agent                   agent_call step
usage_metadata.total_token_count            accumulated token count
is_final_response() + content.parts[].text  agent output
llm_response.model_version                  model metadata (first non-None)

LlamaIndex

Install the optional extra:

uv add 'attest-ai[llamaindex]'

Context manager usage:

from attest import LlamaIndexInstrumentationHandler, AgentResult, expect

with LlamaIndexInstrumentationHandler(agent_id="rag-agent") as handler:
    response = query_engine.query("What is the capital of France?")

trace = handler.build_trace(
    query="What is the capital of France?",
    response=str(response),
)

expect(AgentResult(trace=trace)).output_contains("Paris")

Manual attach/detach:

handler = LlamaIndexInstrumentationHandler(agent_id="rag-agent")
handler.attach()
# ... run queries ...
handler.detach()
trace = handler.build_trace(query="...", response="...")

Event mapping:

LlamaIndex event     Captured as
LLMChatStartEvent    Buffers model name
LLMChatEndEvent      llm_call step with tokens and tool calls
RetrievalStartEvent  Buffers query string
RetrievalEndEvent    retrieval step with nodes and scores

OpenTelemetry

Install the optional extra:

uv add 'attest-ai[otel]'

From collected spans:

from attest import OTelAdapter

trace = OTelAdapter.from_spans(
    spans=collected_spans,
    agent_id="my-agent",
)

Span classification:

Span attribute                                                Classified as
gen_ai.operation.name == "chat" / "completion"                llm_call step
gen_ai.operation.name == "tool", or gen_ai.tool.name present  tool_call step
Other spans                                                   Skipped

The adapter reads gen_ai.* semantic conventions for model, tokens, completion text, and tool parameters.
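
For self-contained tests, spans can be collected in memory with the OpenTelemetry SDK before handing them to the adapter. A minimal sketch, assuming manually set gen_ai.* attributes (the exact token attribute name is an assumption based on the semantic conventions):

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

from attest import OTelAdapter

# Collect finished spans in memory instead of exporting them anywhere.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
tracer = provider.get_tracer("demo")

with tracer.start_as_current_span(
    "chat gpt-4.1",
    attributes={
        "gen_ai.operation.name": "chat",    # classified as an llm_call step
        "gen_ai.request.model": "gpt-4.1",
        "gen_ai.usage.total_tokens": 42,    # attribute name assumed
    },
):
    pass  # the instrumented LLM call would run inside this span

collected_spans = exporter.get_finished_spans()
trace = OTelAdapter.from_spans(spans=collected_spans, agent_id="my-agent")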

Combining provider and framework adapters

A framework adapter captures LLM calls internally. Use both tiers only when you need separate assertions at each level:

from attest import OpenAIAdapter, LangChainAdapter, AgentResult, expect

# Provider-level: assert on a raw OpenAI call
raw_trace = OpenAIAdapter(agent_id="raw-call").trace_from_response(
    response,
    input_messages=[{"role": "user", "content": "Hello"}],
)
expect(AgentResult(trace=raw_trace)).output_contains("Hello")

# Framework-level: assert on the full agent run
lc_adapter = LangChainAdapter(agent_id="orchestrated-run")
with lc_adapter.capture() as handler:
    agent.invoke({"input": "Hello"}, config={"callbacks": [handler]})
expect(AgentResult(trace=lc_adapter.trace)).tool_called("lookup_user")

Writing a custom provider adapter

Inherit BaseProviderAdapter and implement the 4 extraction methods. The base class handles timestamp resolution, builder creation, LLM step construction, and metadata assembly.

from __future__ import annotations

from typing import Any

from attest.adapters import BaseProviderAdapter


class MistralAdapter(BaseProviderAdapter):
    """Adapter for Mistral AI chat completions."""

    def _extract_completion(self, response: Any) -> str:
        return response.choices[0].message.content or ""

    def _extract_model(self, response: Any, **metadata: Any) -> str | None:
        return getattr(response, "model", None)

    def _extract_total_tokens(self, response: Any) -> int | None:
        if hasattr(response, "usage") and response.usage:
            return response.usage.total_tokens
        return None

    def _extract_tool_calls(self, response: Any) -> list[dict[str, Any]]:
        message = response.choices[0].message
        if not hasattr(message, "tool_calls") or not message.tool_calls:
            return []
        return [
            {"name": tc.function.name, "args": {"arguments": tc.function.arguments}}
            for tc in message.tool_calls
        ]

Optional override — _build_output: Customize the trace output dict (default: {"message": completion_text}).

def _build_output(
    self, response: Any, completion_text: str, **metadata: Any
) -> dict[str, Any]:
    return {
        "message": completion_text,
        "structured": metadata.get("structured_output", {}),
    }

Optional override — _extract_input: Customize input format (default: {"messages": input_messages}).
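
A minimal sketch of that override; the signature is assumed to mirror _build_output, and system_prompt is a hypothetical metadata key:

def _extract_input(
    self, input_messages: Any, **metadata: Any
) -> dict[str, Any]:
    # Keep the default messages list, but also record a system prompt
    # if one was passed through metadata (hypothetical key).
    return {
        "messages": input_messages,
        "system": metadata.get("system_prompt"),
    }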

Writing a custom framework adapter

Inherit BaseAdapter for shared utilities. Framework adapters have diverse input mechanics, so there is no template method: implement your own capture method.

from __future__ import annotations

from typing import Any

from attest._proto.types import Trace
from attest.adapters import BaseAdapter


class CrewAIAdapter(BaseAdapter):
    """Adapter for CrewAI task execution."""

    def trace_from_crew(self, crew_output: Any, **metadata: Any) -> Trace:
        builder = self._create_builder()
        total_tokens = 0
        for task_output in crew_output.tasks_output:
            agent_name = task_output.agent
            completion = task_output.raw
            builder.add_llm_call(
                name=agent_name,
                result={"completion": completion},
            )
            for tool_use in getattr(task_output, "tool_uses", []):
                builder.add_tool_call(
                    name=tool_use.tool_name,
                    args=tool_use.args,
                    result=tool_use.result,
                    agent_id=agent_name,
                )
                total_tokens += getattr(tool_use, "tokens", 0)
        builder.set_output(message=crew_output.raw)
        builder.set_metadata(
            total_tokens=total_tokens if total_tokens > 0 else None,
            cost_usd=metadata.get("cost_usd"),
        )
        return builder.build()
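
Typical usage, assuming a configured CrewAI Crew instance named crew:

adapter = CrewAIAdapter(agent_id="crew-run")
crew_output = crew.kickoff()  # standard CrewAI entry point
trace = adapter.trace_from_crew(crew_output, cost_usd=0.02)  # cost_usd is optional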

Shared utilities from BaseAdapter:

Method                                    Purpose
self._create_builder()                    Returns a TraceBuilder pre-configured with agent_id
self._now_ms()                            Current wall-clock time in milliseconds
self._resolve_timestamps(started, ended)  Fills None values with the current time
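
A minimal sketch of these helpers inside a custom capture method; the (started, ended) return shape of _resolve_timestamps is an assumption here:

from typing import Any

from attest._proto.types import Trace
from attest.adapters import BaseAdapter


class MinimalAdapter(BaseAdapter):
    """Hypothetical adapter exercising the shared helpers."""

    def trace_from_run(self, run_output: Any, started_ms: int | None = None) -> Trace:
        # Fill the missing end timestamp with the current time.
        started_ms, ended_ms = self._resolve_timestamps(started_ms, None)
        builder = self._create_builder()  # TraceBuilder pre-configured with agent_id
        builder.set_output(message=str(run_output))
        return builder.build()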

from attest.adapters import BaseAdapter, BaseProviderAdapter

Class                Use case                             Abstract methods
BaseAdapter          Framework adapters, custom adapters  None (concrete)
BaseProviderAdapter  Single LLM call adapters             _extract_completion, _extract_model, _extract_total_tokens, _extract_tool_calls

The legacy TraceAdapter, ProviderAdapter, and FrameworkAdapter Protocols remain exported for backward compatibility. New adapters should use the class hierarchy.