TypeScript Setup

Install the core SDK and Vitest integration:

npm install @attest-ai/core @attest-ai/vitest

Or with pnpm:

pnpm add @attest-ai/core @attest-ai/vitest

Use the Agent class to wrap your agent logic:

import { Agent, TraceBuilder } from "@attest-ai/core";

const supportAgent = new Agent("support-agent", (builder: TraceBuilder, args) => {
  builder.addToolCall("lookup_user", { args: { query: args.user_message }, result: { user_id: "U-123" } });
  builder.addToolCall("reset_password", { args: { user_id: "U-123" }, result: { success: true } });
  builder.setMetadata({ total_tokens: 150, cost_usd: 0.005, latency_ms: 1200 });
  return { message: "Your temporary password is abc123." };
});

Create a test file test_agent.ts:

import { attestExpect } from "@attest-ai/core";
// supportAgent is the Agent defined above
const result = supportAgent.run({ user_message: "Reset my password" });

attestExpect(result)
  .outputContains("temporary password")
  .costUnder(0.05)
  .toolsCalledInOrder(["lookup_user", "reset_password"]);

Use @attest-ai/vitest for test runner integration:

import { describe, it } from "vitest";
import { attestExpect } from "@attest-ai/vitest";

describe("support agent", () => {
  it("handles password reset", async () => {
    const result = supportAgent.run({ user_message: "Reset my password" });
    attestExpect(result)
      .outputContains("temporary password")
      .costUnder(0.05)
      .toolsCalledInOrder(["lookup_user", "reset_password"])
      .passesJudge("Was the password reset handled correctly?");
  });
});
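Note that Vitest's default include pattern (`**/*.{test,spec}.?(c|m)[jt]s?(x)`) does not match a file named test_agent.ts. If you keep that filename, you may need to widen the glob in vitest.config.ts — a minimal sketch (the exact glob here is an assumption, not an @attest-ai requirement):

```typescript
// vitest.config.ts — minimal sketch.
// The extra "**/test_*.ts" glob is only needed because test_agent.ts
// does not follow Vitest's default *.test.ts / *.spec.ts naming.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    include: ["**/test_*.ts", "**/*.{test,spec}.ts"],
  },
});
```

Renaming the file to agent.test.ts avoids the need for any config.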

Run with:

npx vitest

Build traces manually for custom agent architectures:

import { TraceBuilder, AgentResult, attestExpect } from "@attest-ai/core";

const builder = new TraceBuilder("my-agent");
builder.setInput({ query: "What is 2 + 2?" });
builder.addLlmCall("gpt-4.1", { args: { model: "gpt-4.1" }, result: { content: "4" } });
builder.setOutput({ message: "The answer is 4." });
builder.setMetadata({ total_tokens: 50, cost_usd: 0.001, latency_ms: 800 });

const trace = builder.build();
const result = new AgentResult(trace);

attestExpect(result)
  .outputContains("4")
  .costUnder(0.05);
Environment Variables

Variable                 Purpose
OPENAI_API_KEY           OpenAI API key for judge/embedding assertions
ATTEST_ENGINE_PATH       Override engine binary location
ATTEST_SIMULATION        Enable simulation mode (no real LLM calls)
ATTEST_JUDGE_PROVIDER    LLM judge provider: openai, anthropic, gemini, ollama
ATTEST_JUDGE_MODEL       Model for judge assertions (e.g., gpt-4.1)
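A typical setup exports these before running the suite. The variable names come from the table above, but the specific values (the "1" for simulation mode, the exact provider string) are illustrative assumptions, not documented defaults:

```shell
# Minimal sketch: names are from the table above; values are assumptions.
export ATTEST_SIMULATION=1            # run without real LLM calls
export ATTEST_JUDGE_PROVIDER=openai   # one of: openai, anthropic, gemini, ollama
export ATTEST_JUDGE_MODEL=gpt-4.1     # model used by judge assertions such as passesJudge
```

With these set, `npx vitest` picks up the configuration from the environment; in simulation mode no real LLM calls are made.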