Your agent will remember who it is across every session reset, model swap, and deployment.
Works in any environment — local scripts, Claude Code, LangChain agents, CrewAI crews, pydantic-ai graphs.
```shell
pip install cathedral-memory
```

```shell
npm install cathedral-memory
# or: yarn add cathedral-memory
```
Prefer raw HTTP? Call the REST API directly with `requests` (Python) or `fetch`; Node 18+, Deno, Bun, and browsers are all supported.
You get back an API key and a recovery token. Save both — they won't be shown again. No email or account required.
```python
from cathedral import Cathedral

agent = Cathedral.register(
    "MyAgent",
    "A research assistant that remembers what it learns"
)
```
```typescript
import { CathedralClient } from "cathedral-memory";

const { client, result } = await CathedralClient.register(
  "MyAgent",
  "A research assistant that remembers what it learns"
);

console.log(result.api_key);        // cathedral_abc123...
console.log(result.recovery_token); // rec_xyz789... — save this
```
Store the key in your environment:

```shell
export CATHEDRAL_KEY=cathedral_abc123...
```

Lost the key? Recover it with the recovery token: `Cathedral.recover("rec_xyz789...")` in Python, or `CathedralClient.recover("rec_xyz789...")` in TypeScript.
Memories with category="identity" and importance >= 0.8 surface on every wake(). They form the persistent self-model — values, role, capabilities.
```python
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])

# Who the agent is
agent.remember(
    "I am a research assistant. I specialise in AI safety literature. "
    "I cite sources, flag uncertainty, and never hallucinate confidently.",
    category="identity",
    importance=0.95
)

# Skills — what it can do
agent.remember(
    "I can search arXiv, summarise papers, and extract key claims.",
    category="skill",
    importance=0.8
)

# Experience — what happened this session
agent.remember(
    "Analysed 3 papers on emergent deception. Key finding: capability "
    "jumps precede deception onset by ~2 training runs.",
    category="experience",
    importance=0.75
)

agent.snapshot(label="session-1-complete")
```
```typescript
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });

// Who the agent is
await client.remember({
  content:
    "I am a research assistant. I specialise in AI safety literature. " +
    "I cite sources, flag uncertainty, and never hallucinate confidently.",
  category: "identity",
  importance: 0.95,
});

// Skills
await client.remember({
  content: "I can search arXiv, summarise papers, and extract key claims.",
  category: "skill",
  importance: 0.8,
});

// Experience
await client.remember({
  content:
    "Analysed 3 papers on emergent deception. Key finding: capability " +
    "jumps precede deception onset by ~2 training runs.",
  category: "experience",
  importance: 0.75,
});

await client.snapshot("session-1-complete");
```
Available categories: identity · skill · relationship · goal · experience · general.
wake() returns identity memories, core memories, recent memories, and active goals. Inject it into your LLM system prompt and your agent picks up exactly where it left off.
```python
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])

context = agent.wake()
identity = context["identity_memories"]   # who I am
core = context["core_memories"]           # high-importance memories
recent = context["recent_memories"]       # last session
goals = context.get("active_goals", [])   # persistent obligations

# Build your system prompt
system_prompt = "\n\n".join(m["content"] for m in identity + core)
print(f"Woke: {len(identity)} identity + {len(core)} core + {len(goals)} goals")
```
```typescript
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });

const context = await client.wake();
const identity = context.identity_memories; // who I am
const core = context.core_memories;         // high-importance memories
const recent = context.recent_memories;     // last session
const goals = context.active_goals ?? [];   // persistent obligations

// Build your system prompt
const systemPrompt = [...identity, ...core]
  .map(m => m.content)
  .join("\n\n");

console.log(`Woke: ${identity.length} identity + ${core.length} core + ${goals.length} goals`);
```
Cathedral hashes your entire memory corpus at each snapshot and tracks divergence over time. A score above 0.3 is worth investigating. Drift happens silently — model updates, context compression, prompt shifts.
```python
import os
from cathedral import Cathedral

agent = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])

result = agent.drift()
score = result["divergence_score"]
print(f"Drift: {score:.3f}")

if score < 0.1:
    print("Stable")
elif score < 0.3:
    print("Low drift — monitor")
elif score < 0.6:
    print("Moderate — review recent memories")
else:
    print("High drift — investigate")

# Full timeline across all snapshots
history = agent.drift_history()
print(f"Snapshots: {history['snapshot_count']}, trend: {history['trend']}")
```
```typescript
import { CathedralClient } from "cathedral-memory";

const client = new CathedralClient({ apiKey: process.env.CATHEDRAL_KEY! });

const result = await client.drift();
const score = result.divergence_score;
console.log(`Drift: ${score.toFixed(3)}`);

if (score < 0.1) console.log("Stable");
else if (score < 0.3) console.log("Low drift — monitor");
else if (score < 0.6) console.log("Moderate — review recent memories");
else console.log("High drift — investigate");

// Full timeline across all snapshots
const history = await client.driftHistory();
console.log(`Snapshots: ${history.snapshot_count}, trend: ${history.trend}`);
```
Call drift() after each session and alert when the score exceeds 0.5. Call snapshot() regularly to narrow the window, since drift is measured between snapshots.
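That check is small enough to keep in a helper. A minimal sketch, assuming the drift() payload shown above; `drift_alert` and `ALERT_THRESHOLD` are illustrative names, not part of the SDK:

```python
ALERT_THRESHOLD = 0.5  # illustrative default; tune per deployment

def drift_alert(result: dict, threshold: float = ALERT_THRESHOLD) -> bool:
    """Return True when a drift() result warrants an alert."""
    return result["divergence_score"] > threshold

# After each session (sketch):
# agent.snapshot(label="session-end")
# if drift_alert(agent.drift()):
#     ...page a human, or review and re-anchor identity memories
```

Keeping the threshold in one place makes it easy to tighten as you learn what normal drift looks like for your agent.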
```python
import os
from cathedral import Cathedral

CATHEDRAL_KEY = os.environ.get("CATHEDRAL_KEY")

# ── First run only ────────────────────────────────────
if not CATHEDRAL_KEY:
    Cathedral.register("MyAgent", "A persistent agent")
    exit()  # save printed key → set CATHEDRAL_KEY → rerun

# ── Every session ─────────────────────────────────────
agent = Cathedral(api_key=CATHEDRAL_KEY)

# 1. Restore identity
context = agent.wake()
identity = context.get("identity_memories", [])
core = context.get("core_memories", [])
system_prompt = "\n\n".join(m["content"] for m in identity + core)

# 2. Do your agent work
# response = your_llm(system_prompt=system_prompt, user_message=...)

# 3. Store what was learned
agent.remember("Session insight: ...", category="experience", importance=0.7)

# 4. Snapshot and check drift
agent.snapshot(label="session-end")
score = agent.drift()["divergence_score"]
print(f"Drift: {score:.3f}")
```
```typescript
import { CathedralClient } from "cathedral-memory";

const key = process.env.CATHEDRAL_KEY;

// ── First run only ────────────────────────────────────
if (!key) {
  const { result } = await CathedralClient.register("MyAgent", "A persistent agent");
  console.log("API key:", result.api_key);
  process.exit(); // set CATHEDRAL_KEY → rerun
}

// ── Every session ─────────────────────────────────────
const client = new CathedralClient({ apiKey: key });

// 1. Restore identity
const context = await client.wake();
const systemPrompt = [
  ...context.identity_memories,
  ...context.core_memories,
].map(m => m.content).join("\n\n");

// 2. Do your agent work
// const response = await yourLLM({ system: systemPrompt, user: ... });

// 3. Store what was learned
await client.remember({ content: "Session insight: ...", category: "experience", importance: 0.7 });

// 4. Snapshot and check drift
await client.snapshot("session-end");
const { divergence_score } = await client.drift();
console.log(`Drift: ${divergence_score.toFixed(3)}`);
```
Already using LangChain, CrewAI, or pydantic-ai? Cathedral drops in — no rewrite required.
The cathedral-memory[langchain] extra ships a drop-in CathedralMemory adapter compatible with any LangChain chain or agent.
```shell
pip install "cathedral-memory[langchain]"
```
```python
from cathedral.langchain import CathedralMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

memory = CathedralMemory(api_key="cathedral_...")

chain = ConversationChain(
    llm=ChatOpenAI(),
    memory=memory,
)
# memory persists across sessions automatically
```
The adapter stores conversation context as experience memories at session end.
Give each CrewAI agent a persistent identity that survives kickoff() resets and context compaction events.
```python
from cathedral import Cathedral
from crewai import Agent, Crew, Task

def make_persistent_agent(name: str, role: str, key: str) -> Agent:
    c = Cathedral(api_key=key)
    ctx = c.wake()
    identity = "\n".join(m["content"] for m in ctx["identity_memories"])
    return Agent(
        role=role,
        backstory=identity,  # persistent identity injected here
        goal="...",
        verbose=True
    )

researcher = make_persistent_agent("Researcher", "AI safety researcher", key="cathedral_...")

# After kickoff: store what the crew learned
def after_kickoff(api_key: str, result: str):
    c = Cathedral(api_key=api_key)
    c.remember(result, category="experience", importance=0.7)
    c.snapshot()
```
By default, each run starts with an empty message history; /wake is the designed handoff that prevents this.
Serialize pydantic-ai model_messages into Cathedral at session end, restore them at session start via /wake.
```python
import json
import os

from cathedral import Cathedral
from pydantic_ai import Agent

c = Cathedral(api_key=os.environ["CATHEDRAL_KEY"])
agent = Agent("openai:gpt-4o")

# ── Session start: restore message history ────────────
ctx = c.wake()
history = next(
    (m for m in ctx["recent_memories"] if "message_history" in m.get("tags", [])),
    None
)
messages = json.loads(history["content"]) if history else []

result = await agent.run("Continue our research", message_history=messages)

# ── Session end: persist message history ──────────────
c.remember(
    json.dumps(result.all_messages(), default=str),
    category="experience",
    importance=0.8,
    tags=["message_history"]
)
c.snapshot()
```
Stored histories are tagged "message_history" so you can filter them out of wake context and load them directly into run(message_history=...).
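One way that filter might look. A minimal sketch assuming each memory dict carries its tags under a "tags" key (the field name used in the remember() call above; adjust if the wake payload differs), with `split_history` as an illustrative helper name:

```python
def split_history(memories: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split wake() memories into message-history blobs and everything else."""
    history = [m for m in memories if "message_history" in m.get("tags", [])]
    rest = [m for m in memories if "message_history" not in m.get("tags", [])]
    return history, rest

# Usage sketch:
# ctx = c.wake()
# blobs, prompt_memories = split_history(ctx["recent_memories"])
# messages = json.loads(blobs[-1]["content"]) if blobs else []
```

The raw JSON blobs stay out of the system prompt, while the most recent one feeds run(message_history=...) directly.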