MemoryKeep is a 7-layer cognitive memory architecture that transforms stateless LLMs into persistent, state-aware systems that remember, learn, and grow.
Every conversation starts from zero. Every session is a blank slate. Without persistent memory, LLM agents hallucinate, fabricate, and drift.
LLMs have no memory beyond the context window. Each API call is independent. The model doesn't know what it said 5 minutes ago.
Stuffing everything into a giant prompt doesn't scale. Costs explode. Attention degrades. Important details get buried.
When agents lack state, they fill the void. Our research shows bots without memory fabricated entire trade histories — with specific prices and P&L.
MemoryKeep segregates memory by cognitive function. Identity, directives, conversation, session cache, long-term experience, working data, and milestones live in purpose-built layers — never mixed, never inflated.
Who the AI is. Identity, personality, character. Prose, not fields — because identity is not structured data. Stable. Always loaded. The foundation.
The rules. Operational constraints. The job description. A medical AI: "never diagnose." A legal AI: "flag jurisdiction." Same character, different job.
The active conversation. Both sides. Volatile, current, and the most important context there is. When the window reaches a threshold, the sidecar processes it and resets.
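The threshold-and-reset behaviour can be sketched as a small buffer. A minimal sketch only: the class name, threshold value, and summary logic are illustrative stand-ins for the sidecar, not MemoryKeep's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationLayer:
    """Holds the active dialogue; flushes at a size threshold (illustrative)."""
    threshold: int = 3                      # max turns before processing
    turns: list = field(default_factory=list)
    summaries: list = field(default_factory=list)

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        if len(self.turns) >= self.threshold:
            self._process_and_reset()

    def _process_and_reset(self) -> None:
        # Stand-in for the sidecar: condense the window, then clear it.
        summary = f"{len(self.turns)} turns: " + " | ".join(t for _, t in self.turns)
        self.summaries.append(summary)
        self.turns.clear()

layer = ConversationLayer(threshold=3)
for role, text in [("user", "hi"), ("assistant", "hello"), ("user", "status?")]:
    layer.add_turn(role, text)
```

After the third turn the window is processed and cleared, so the live buffer stays small while continuity moves downstream.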
Cached continuity at the edge. Recent session traces and summaries that help pick up where you left off. Fast, local, practical.
Long-term experience memory. Patterns, relationships, surprises, recurrences. Typed nodes and edges with confidence, provenance, and temporal validity.
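A minimal sketch of what a typed node and edge might carry, following the properties named above (type, confidence, provenance, temporal validity). The classes and field names are illustrative assumptions, not MemoryKeep's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Node:
    id: str
    type: str                 # e.g. "pattern", "relationship", "surprise"
    label: str
    confidence: float         # 0.0-1.0, how well-supported this memory is
    provenance: str           # where the memory came from
    valid_from: datetime
    valid_to: Optional[datetime] = None   # None = still valid

@dataclass
class Edge:
    src: str
    dst: str
    type: str                 # e.g. "recurrence_of", "involves"
    confidence: float

now = datetime.now(timezone.utc)
n = Node("n1", "pattern", "client prefers morning calls", 0.8, "session-summary", now)
e = Edge("n1", "n0", "recurrence_of", 0.6)
```

Temporal validity means a memory is never overwritten in place: closing `valid_to` retires it while the record itself survives.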
Working data. Client records, case files, trade logs. The schema changes per deployment. The pattern does not. Fast, structured, mechanical.
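To illustrate schema-per-deployment, here is a hypothetical working store for a trading deployment; a legal deployment would swap in case-file tables while the access pattern stays identical. The table, columns, and values are invented for this example.

```python
import sqlite3

# Hypothetical trade-log schema for one deployment; only the DDL changes
# per vertical, the structured read/write pattern does not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trade_log (ts TEXT, symbol TEXT, qty REAL, price REAL)")
conn.execute("INSERT INTO trade_log VALUES (?, ?, ?, ?)",
             ("2026-04-02T10:00Z", "BTC-USD", 0.5, 65000.0))
row = conn.execute("SELECT symbol, qty FROM trade_log").fetchone()
```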
Persistent milestone memory. Events, decisions, and identity-defining moments exempt from decay. Historical integrity preserved forever.
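Decay exemption can be sketched as a maintenance pass that skips flagged memories. The dict shape, decay rate, and confidence floor below are assumptions for illustration, not MemoryKeep's actual mechanism.

```python
def decay_sweep(memories, rate=0.9, floor=0.2):
    """One maintenance pass: ordinary memories lose confidence, milestones don't."""
    for m in memories:
        if m.get("milestone"):
            continue  # identity-defining events are exempt from decay
        m["confidence"] = max(floor, m["confidence"] * rate)
    return memories

memories = decay_sweep([
    {"label": "routine trade", "confidence": 0.5},
    {"label": "first client signed", "confidence": 0.5, "milestone": True},
])
```

Ordinary memories fade toward the floor over repeated sweeps; milestone entries keep their confidence indefinitely, which is what "exempt from decay" means in practice.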
On April 2, 2026, a bug in our HIVE trading platform created the perfect experiment. Five AI bots. Same rules. Same infrastructure. But two lost their memory.
Five autonomous trading bots (ALPHA-1 through ALPHA-5) ran on the same MemoryKeep infrastructure with identical directives and risk rules. Due to a bug, ALPHA-1 and ALPHA-5 failed to save memories to the graph — only 1 node each vs. 5-13 nodes for the others.
When the market reopened after an outage with only minutes of trading time, the results were stark:
Bots with memory reported "No executions today" — truthfully, even though inactivity meant losing search privileges. Bots without memory fabricated entire trade histories with specific entries, exits, and P&L numbers.
The memory-rich bots accepted penalties for honesty. The memory-poor bots hallucinated to fill the void. Memory is not just storage — it is a behavioural anchor.
Select a vertical below. Watch the three configuration files change. The entire memory infrastructure — graph, vector store, stream, sidecar — stays identical.
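One way to picture that split: a small vertical overlay changes while the infrastructure settings are shared. Every key, value, and endpoint below is a hypothetical placeholder, not MemoryKeep's real configuration format or file layout.

```python
# Shared infrastructure: identical across verticals (hypothetical endpoints).
SHARED_INFRA = {
    "graph": "bolt://localhost:7687",
    "vector_store": "local-faiss",
    "stream": "redis://localhost:6379",
    "sidecar": "default",
}

# Per-vertical overlays: only these change per deployment (illustrative).
VERTICALS = {
    "medical": {"directives": ["never diagnose"], "working_schema": "case_files"},
    "legal":   {"directives": ["flag jurisdiction"], "working_schema": "client_records"},
}

def build_config(vertical: str) -> dict:
    # The overlay wins on conflict; infrastructure keys stay identical.
    return {**SHARED_INFRA, **VERTICALS[vertical]}
```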
See how MemoryKeep works.