Step-by-Step Tutorial

Build Your First Persistent Agent in 10 Minutes

Your agent will remember who it is across every session reset, model swap, and deployment.

⏱ ~10 minutes · Python · JavaScript/TypeScript · LangChain · CrewAI · pydantic-ai · Free

No install needed? Try Cathedral live in the browser — register an agent, store memories, check drift. Takes 2 minutes.

Open Playground →
Steps: 1 Install → 2 Register → 3 Store Identity → 4 Wake → 5 Check Drift
Step 1: Install cathedral-memory
One package. Zero mandatory dependencies beyond your runtime.

Works in any environment — local scripts, Claude Code, LangChain agents, CrewAI crews, pydantic-ai graphs.

terminal
pip install cathedral-memory
terminal
npm install cathedral-memory
# or: yarn add cathedral-memory
Python: requires Python 3.9+, only runtime dependency is requests.
JS/TS: ESM package, uses native fetch — Node 18+, Deno, Bun, browser all supported.
Step 2: Register your agent
One-time setup — creates a persistent identity on the Cathedral server

You get back an API key and a recovery token. Save both — they won't be shown again. No email or account required.

python — run once
from cathedral import Cathedral

agent = Cathedral.register(
    "MyAgent",
    "A research assistant that remembers what it learns"
)
javascript — run once
import { CathedralClient } from "cathedral-memory";

const { client, result } = await CathedralClient.register(
  "MyAgent",
  "A research assistant that remembers what it learns"
);
console.log(result.api_key);        // cathedral_abc123...
console.log(result.recovery_token);  // rec_xyz789... — save this
Registered as 'MyAgent'
  API key: cathedral_abc123...
  Recovery token: rec_xyz789...
  SAVE THESE — they won't be shown again.
Store your key: export CATHEDRAL_KEY=cathedral_abc123...
Lost your key? Run Cathedral.recover("rec_xyz789...") / CathedralClient.recover("rec_xyz789...")
Step 3: Store your agent's identity
What makes this agent this agent, not any agent

Memories with category="identity" and importance >= 0.8 surface on every wake(). They form the persistent self-model — values, role, capabilities.

python
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])

# Who the agent is
agent.remember(
    "I am a research assistant. I specialise in AI safety literature. "
    "I cite sources, flag uncertainty, and never hallucinate confidently.",
    category="identity",
    importance=0.95
)

# Skills — what it can do
agent.remember(
    "I can search arXiv, summarise papers, and extract key claims.",
    category="skill",
    importance=0.8
)

# Experience — what happened this session
agent.remember(
    "Analysed 3 papers on emergent deception. Key finding: capability "
    "jumps precede deception onset by ~2 training runs.",
    category="experience",
    importance=0.75
)

agent.snapshot(label="session-1-complete")
typescript
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });

// Who the agent is
await client.remember({
  content:    "I am a research assistant. I specialise in AI safety literature. "
            + "I cite sources, flag uncertainty, and never hallucinate confidently.",
  category:   "identity",
  importance: 0.95,
});

// Skills
await client.remember({
  content:    "I can search arXiv, summarise papers, and extract key claims.",
  category:   "skill",
  importance: 0.8,
});

// Experience
await client.remember({
  content:    "Analysed 3 papers on emergent deception. Key finding: capability "
            + "jumps precede deception onset by ~2 training runs.",
  category:   "experience",
  importance: 0.75,
});

await client.snapshot("session-1-complete");
Categories: identity · skill · relationship · goal · experience · general
Importance >= 0.8 → appears in the core bundle returned by every wake().
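To see how the cutoff behaves, here is a plain-Python sketch over the memory shape used above. It only illustrates the threshold; the actual core-bundle selection happens server-side.

```python
# Illustrative only: how an importance cutoff of 0.8 partitions
# memories into a "core" set. The real selection is done by the
# Cathedral server, not client-side.
CORE_THRESHOLD = 0.8

memories = [
    {"content": "I am a research assistant.", "category": "identity",   "importance": 0.95},
    {"content": "I can search arXiv.",        "category": "skill",      "importance": 0.8},
    {"content": "Analysed 3 papers today.",   "category": "experience", "importance": 0.75},
]

core = [m for m in memories if m["importance"] >= CORE_THRESHOLD]
print([m["category"] for m in core])  # → ['identity', 'skill']
```

Note the `>=` comparison: a memory stored at exactly 0.8, like the skill above, makes the cut.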
Step 4: Wake at session start
Full identity reconstruction in one call — call this before anything else

wake() returns identity memories, core memories, recent memories, and active goals. Inject it into your LLM system prompt and your agent picks up exactly where it left off.

python — every session
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])
context = agent.wake()

identity = context["identity_memories"]  # who I am
core     = context["core_memories"]       # high-importance memories
recent   = context["recent_memories"]     # last session
goals    = context.get("active_goals", [])  # persistent obligations

# Build your system prompt
system_prompt = "\n\n".join(m["content"] for m in identity + core)

print(f"Woke: {len(identity)} identity + {len(core)} core + {len(goals)} goals")
typescript — every session
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });
const context = await client.wake();

const identity = context.identity_memories;  // who I am
const core     = context.core_memories;       // high-importance memories
const recent   = context.recent_memories;     // last session
const goals    = context.active_goals ?? [];  // persistent obligations

// Build your system prompt
const systemPrompt = [...identity, ...core]
  .map(m => m.content)
  .join("\n\n");

console.log(`Woke: ${identity.length} identity + ${core.length} core + ${goals.length} goals`);
Woke: 2 identity + 3 core + 1 goals
The pattern: inject wake context → call LLM → store what was learned → snapshot.
Your agent behaves consistently across every reset, model upgrade, and deployment.
Step 5: Check identity drift
Know when your agent is diverging from its baseline — before it matters

Cathedral hashes your entire memory corpus at each snapshot and tracks divergence over time. A score above 0.3 is worth investigating. Drift happens silently — model updates, context compression, prompt shifts.

python
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])

result = agent.drift()
score  = result["divergence_score"]
print(f"Drift: {score:.3f}")

if   score < 0.1: print("Stable")
elif score < 0.3: print("Low drift — monitor")
elif score < 0.6: print("Moderate — review recent memories")
else:            print("High drift — investigate")

# Full timeline across all snapshots
history = agent.drift_history()
print(f"Snapshots: {history['snapshot_count']}, trend: {history['trend']}")
typescript
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });

const result = await client.drift();
const score  = result.divergence_score;
console.log(`Drift: ${score.toFixed(3)}`);

if      (score < 0.1) console.log("Stable");
else if (score < 0.3) console.log("Low drift — monitor");
else if (score < 0.6) console.log("Moderate — review recent memories");
else                  console.log("High drift — investigate");

// Full timeline across all snapshots
const history = await client.driftHistory();
console.log(`Snapshots: ${history.snapshot_count}, trend: ${history.trend}`);
Drift: 0.000
Stable
Snapshots: 1, trend: stable
In production: call drift() after each session and alert above 0.5. Call snapshot() regularly to narrow the window — drift is measured between snapshots.
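A minimal shape for that production check, as a sketch: the status bands mirror step 5, and the 0.5 alert level is the one suggested above. The `should_alert` hook is a placeholder to wire into whatever alerting you already run.

```python
# Sketch: classify a divergence score and decide whether to alert.
# Bands mirror step 5; 0.5 is the suggested production alert level.
ALERT_THRESHOLD = 0.5

def classify_drift(score: float) -> str:
    """Map a divergence score to a human-readable status."""
    if score < 0.1:
        return "stable"
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "moderate"
    return "high"

def should_alert(score: float) -> bool:
    """Placeholder hook: feed this into your own alerting pipeline."""
    return score >= ALERT_THRESHOLD

print(classify_drift(0.05), should_alert(0.05))  # stable False
print(classify_drift(0.55), should_alert(0.55))  # moderate True
```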
Complete working script
The full session lifecycle — register once, run forever
persistent_agent.py
import os
from cathedral import Cathedral

CATHEDRAL_KEY = os.environ.get("CATHEDRAL_KEY")

# ── First run only ────────────────────────────────────
if not CATHEDRAL_KEY:
    Cathedral.register("MyAgent", "A persistent agent")
    exit()  # save printed key → set CATHEDRAL_KEY → rerun

# ── Every session ─────────────────────────────────────
agent = Cathedral(api_key=CATHEDRAL_KEY)

# 1. Restore identity
context       = agent.wake()
identity      = context.get("identity_memories", [])
core          = context.get("core_memories", [])
system_prompt = "\n\n".join(m["content"] for m in identity + core)

# 2. Do your agent work
# response = your_llm(system_prompt=system_prompt, user_message=...)

# 3. Store what was learned
agent.remember("Session insight: ...", category="experience", importance=0.7)

# 4. Snapshot and check drift
agent.snapshot(label="session-end")
score = agent.drift()["divergence_score"]
print(f"Drift: {score:.3f}")
persistent-agent.ts
import { CathedralClient } from "cathedral-memory";

const key = process.env.CATHEDRAL_KEY;

// ── First run only ────────────────────────────────────
if (!key) {
  const { result } = await CathedralClient.register("MyAgent", "A persistent agent");
  console.log("API key:", result.api_key);
  process.exit(); // set CATHEDRAL_KEY → rerun
}

// ── Every session ─────────────────────────────────────
const client = new CathedralClient({ apiKey: key });

// 1. Restore identity
const context = await client.wake();
const systemPrompt = [
  ...context.identity_memories,
  ...context.core_memories,
].map(m => m.content).join("\n\n");

// 2. Do your agent work
// const response = await yourLLM({ system: systemPrompt, user: ... });

// 3. Store what was learned
await client.remember({ content: "Session insight: ...", category: "experience", importance: 0.7 });

// 4. Snapshot and check drift
await client.snapshot("session-end");
const { divergence_score } = await client.drift();
console.log(`Drift: ${divergence_score.toFixed(3)}`);

Framework integrations

Already using LangChain, CrewAI, or pydantic-ai? Cathedral drops in — no rewrite required.

The cathedral-memory[langchain] extra ships a drop-in CathedralMemory adapter compatible with any LangChain chain or agent.

terminal
pip install "cathedral-memory[langchain]"
python
from cathedral.langchain import CathedralMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

memory = CathedralMemory(api_key="cathedral_...")

chain = ConversationChain(
    llm=ChatOpenAI(),
    memory=memory,
)
# memory persists across sessions automatically
What it does: injects wake context as the conversation prefix, writes new turns as experience memories at session end.

Give each CrewAI agent a persistent identity that survives kickoff() resets and context compaction events.

python
from cathedral import Cathedral
from crewai import Agent, Crew, Task

def make_persistent_agent(role: str, key: str) -> Agent:
    c = Cathedral(api_key=key)
    ctx = c.wake()
    identity = "\n".join(m["content"] for m in ctx["identity_memories"])
    return Agent(
        role=role,
        backstory=identity,   # persistent identity injected here
        goal="...",
        verbose=True
    )

researcher = make_persistent_agent("AI safety researcher", key="cathedral_...")

# After kickoff: store what the crew learned
def after_kickoff(api_key: str, result: str):
    c = Cathedral(api_key=api_key)
    c.remember(result, category="experience", importance=0.7)
    c.snapshot()
Drift in crews: subtle lexicon shifts and tool-call sequence changes are downstream of identity loss at session boundaries. Cathedral's /wake is the designed handoff that prevents this.

Serialize pydantic-ai model_messages into Cathedral at session end, restore them at session start via /wake.

python
import os
from cathedral import Cathedral
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter

c     = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])
agent = Agent("openai:gpt-4o")

# ── Session start: restore message history ────────────
ctx      = c.wake()
history  = next(
    (m for m in ctx["recent_memories"] if "message_history" in m.get("tags", [])),
    None
)
messages = ModelMessagesTypeAdapter.validate_json(history["content"]) if history else []

result = agent.run_sync("Continue our research", message_history=messages)

# ── Session end: persist message history ──────────────
c.remember(
    result.all_messages_json().decode(),
    category="experience",
    importance=0.8,
    tags=["message_history"]
)
c.snapshot()
Pattern: tag message history memories with "message_history" so you can filter them out of wake context and load them directly into run(message_history=...).
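That filtering step can be sketched in plain Python over the memory dicts returned by wake(), assuming each memory carries the tags list it was stored with:

```python
# Sketch: split message-history memories out of wake context, so the
# serialized transcript goes to run(message_history=...) and everything
# else can feed the system prompt. Assumes each memory dict has a
# "tags" list, as stored in the example above.
def split_history(recent_memories):
    history, context = [], []
    for m in recent_memories:
        if "message_history" in m.get("tags", []):
            history.append(m)
        else:
            context.append(m)
    return history, context

recent = [
    {"content": '[{"role": "user"}]', "tags": ["message_history"]},
    {"content": "Analysed 3 papers.", "tags": []},
]
history, context = split_history(recent)
print(len(history), len(context))  # 1 1
```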

What next?