What You'll Build in 10 Minutes

By the end of this guide, every AI inference in your application will be:

WITNESSED — Cryptographic proof the inference happened
ANCHORED — SWT3 Witness Anchor in the immutable ledger
CLEARED — Raw prompts/responses purged from the wire
AUDITABLE — Mapped to EU AI Act & NIST AI RMF

Zero data retention. At Clearing Level 1+, your prompts and responses never leave your infrastructure. The witness endpoint is a "Blind Registrar" — it stores cryptographic proofs, not data.
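To make "proofs, not data" concrete, here is a minimal sketch of the idea. The payload shape and field names below are illustrative assumptions, not the SDK's actual wire format:

```python
import hashlib
import json

# Illustration only: what "cryptographic proofs, not data" means.
# The real SDK wire format may differ; field names here are assumed.
prompt = "Summarize this contract"
response = "The contract grants a 12-month license..."

witness_record = {
    # One-way hashes: the registrar can later verify integrity,
    # but can never reconstruct the original text from them.
    "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
    "response_hash": hashlib.sha256(response.encode()).hexdigest(),
    "model": "gpt-4o",
}

payload = json.dumps(witness_record)
# The raw prompt and response never appear in the payload.
assert prompt not in payload and response not in payload
```

The witness endpoint only ever sees the hex digests, which is why it can act as a "Blind Registrar".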
1. Install the SDK (30 seconds)

Python:  pip install swt3-ai
TypeScript:  npm install @tenova/swt3-ai

Zero dependencies. The SDK uses only Python/Node.js standard library + your existing AI client.

2. Get Your API Key (1 minute)

Log in to your Axiom Dashboard → Settings → API Keys and create a new key.

Your key starts with axm_live_. You'll also need your Tenant ID (shown on the Settings page).

Keep your key safe. It's shown once and cannot be recovered. Store it in an environment variable, never in code.
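For example, in your shell profile or deployment secrets (the key value below is a placeholder, not a real credential):

```shell
# Store credentials in the environment, never in source code.
export SWT3_API_KEY="axm_live_your_key_here"   # placeholder value
export SWT3_TENANT_ID="your-tenant-id"          # shown on the Settings page

# Quick sanity check that the key has the expected prefix
case "$SWT3_API_KEY" in
  axm_live_*) echo "key format OK" ;;
  *)          echo "unexpected key format" ;;
esac
```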
3. Wrap Your AI Client (3 minutes — this is the only code change)

Python + OpenAI:
import os

from swt3_ai import Witness
from openai import OpenAI

# Initialize the witness (once, at startup)
witness = Witness(
    endpoint="https://sovereign.tenova.io",
    api_key=os.environ["SWT3_API_KEY"],
    tenant_id=os.environ["SWT3_TENANT_ID"],
)

# Wrap your client — this is the ONLY change
client = witness.wrap(OpenAI())

# Use it exactly as before. Every inference is now witnessed.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this contract"}],
)
print(response.choices[0].message.content)
# ^ Untouched. Zero latency added. Witnessing happens in the background.
Python + Anthropic:

import os

from swt3_ai import Witness
from anthropic import Anthropic

witness = Witness(
    endpoint="https://sovereign.tenova.io",
    api_key=os.environ["SWT3_API_KEY"],
    tenant_id=os.environ["SWT3_TENANT_ID"],
)

client = witness.wrap(Anthropic())

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Draft a compliance memo"}],
)
TypeScript + OpenAI:

import { Witness } from "@tenova/swt3-ai";
import OpenAI from "openai";

const witness = new Witness({
  endpoint: "https://sovereign.tenova.io",
  apiKey: process.env.SWT3_API_KEY,
  tenantId: process.env.SWT3_TENANT_ID,
});

const client = witness.wrap(new OpenAI()) as OpenAI;

// Works with streaming too
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
Vercel AI SDK:

import { Witness } from "@tenova/swt3-ai";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const witness = new Witness({ /* ... */ });
const prompt = "Summarize this contract for the board";

const result = await streamText({
  model: openai("gpt-4o"),
  prompt,
  onFinish: witness.vercelOnFinish({ promptText: prompt }),
});
// Works with any Vercel AI SDK provider — OpenAI, Anthropic, Google, custom
That's it. No decorators. No middleware. No configuration files. Your AI client works exactly as before — the witness observes silently in the background.
4. See the Green Dot (immediate — check your dashboard)

Open your Axiom Dashboard → AI Witness page. You'll see a green dot next to each witnessed inference.

Every anchor maps to a specific regulatory requirement. Your auditor can verify each one mathematically.
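"Verify mathematically" amounts to recomputing a hash and comparing it to the anchored value. The sketch below illustrates this under stated assumptions: `verify_anchor` is a hypothetical helper, not the SDK's API, and it assumes you retained the original text in your own infrastructure:

```python
import hashlib

# Hypothetical audit check: an auditor recomputes the hash of text
# retained in your infrastructure and compares it to the hash recorded
# in the ledger anchor. Names here are illustrative, not the SDK's API.
retained_response = "The contract grants a 12-month license..."
anchored_hash = hashlib.sha256(retained_response.encode()).hexdigest()

def verify_anchor(local_text: str, ledger_hash: str) -> bool:
    """Recompute the hash of locally retained text and compare to the ledger."""
    return hashlib.sha256(local_text.encode()).hexdigest() == ledger_hash

print(verify_anchor(retained_response, anchored_hash))  # True
print(verify_anchor("tampered text", anchored_hash))    # False
```

Because SHA-256 is collision-resistant, a match proves the retained text is the text that was witnessed.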

What Gets Witnessed Per Inference

Each inference automatically produces anchors for these procedures:

| Procedure | What It Proves | Regulation |
| --- | --- | --- |
| AI-INF.1 | Inference provenance (prompt + response hashed) | EU AI Act Art. 12 |
| AI-INF.2 | Latency within threshold (detects model swaps) | NIST AI RMF MEASURE 2.6 |
| AI-MDL.1 | Model hash matches approved version | EU AI Act Art. 9 |
| AI-MDL.2 | Model version identifier recorded | EU AI Act Art. 72 |
| AI-GRD.2 | No content filter or refusal triggered | EU AI Act Art. 9 |
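The latency check in AI-INF.2 can be sketched as a simple band comparison. The threshold values and the check itself are assumptions for illustration, not the SDK's actual implementation:

```python
import time

# Illustrative sketch of the AI-INF.2 idea: latency far outside the
# approved model's historical band can indicate a silent model swap.
APPROVED_LATENCY_MS = (200.0, 4000.0)  # hypothetical band for gpt-4o

def check_latency(start: float, end: float) -> bool:
    """Return True if the observed latency falls inside the approved band."""
    latency_ms = (end - start) * 1000
    low, high = APPROVED_LATENCY_MS
    return low <= latency_ms <= high

start = time.monotonic()
# ... client.chat.completions.create(...) would run here ...
time.sleep(0.25)  # stand-in for a real inference call
end = time.monotonic()

print(check_latency(start, end))  # True: ~250 ms is inside the band
```

A much faster or slower response than the approved model has ever produced is a cheap, content-free signal that something upstream changed.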

Clearing Levels — You Control What Leaves

The Clearing Engine controls what data travels on the wire to the witness endpoint. Your code always gets the full response.

| Level | Name | What's on the Wire | Use Case |
| --- | --- | --- | --- |
| 0 | Analytics | Hashes + factors + model + provider | Internal analytics |
| 1 | Standard | Hashes + factors + model | Production apps (default) |
| 2 | Sensitive | Hashes + factors + model only | Healthcare, legal, PII |
| 3 | Classified | Numeric factors only, model hashed | Defense, air-gapped |
# Set clearing level at initialization
witness = Witness(
    endpoint="https://sovereign.tenova.io",
    api_key="axm_live_...",
    tenant_id="YOUR_ENCLAVE",
    clearing_level=2,  # Sensitive — no provider names on the wire
)
At Level 1+, raw prompts and responses never leave your infrastructure. The witness endpoint stores cryptographic proofs, not data, so your data-exposure risk is minimal.

Sovereign Cloud — Any Model, Any Infrastructure

The SDK works with any OpenAI-compatible endpoint. Run Llama 3 on vLLM, Mistral on Ollama, or any model behind your own API:

# vLLM on your GPU cluster
client = witness.wrap(OpenAI(base_url="http://gpu-cluster:8000/v1"))

# Ollama (local development)
client = witness.wrap(OpenAI(base_url="http://localhost:11434/v1"))

# Same SWT3 anchors, same ledger, same audit trail,
# regardless of where the model runs
Start Witnessing →