The Memento Problem
Your AI agents wake up every session with no memory of what happened five minutes ago. The fix isn’t a bigger brain. It’s a better system.
There’s a scene in Christopher Nolan’s Memento that I think about constantly when working with AI agents.
Leonard, the protagonist, wakes up in a motel room. He doesn’t know how he got there. He doesn’t remember checking in, doesn’t remember the drive, doesn’t remember anything from the past few hours. But he’s not confused. He’s not panicking. He calmly checks his pockets, reads the notes he left himself, examines the Polaroids with handwritten captions, and pieces together enough context to act.
Leonard has anterograde amnesia. He can’t form new memories. Every few minutes, his slate is wiped clean. And yet he functions. He pursues complex goals across days and weeks. He adapts to new information. He makes progress.
How?
Not by having a better brain. By having a better system.
The Amnesia Everyone Ignores
Here’s the uncomfortable truth about AI agents: they’re all Leonard. Every single one of them wakes up with no memory of what happened five minutes ago. They have tremendous capability and absolutely no continuity.
Most teams respond to this by trying to give their agents bigger brains. Longer context windows. More parameters. Better reasoning. This is like trying to cure Leonard’s condition by teaching him speed-reading. It misses the point entirely.
The failure modes are predictable. First: the agent tries to do everything at once. It attempts to complete the entire project in a single session, like Leonard trying to solve the murder in one caffeine-fueled night. It runs out of context mid-implementation, leaving the next session to inherit half-finished code and no documentation.
Second: the agent declares victory prematurely. It looks around, sees that some progress has been made, and announces that the job is complete. Without external verification, it has no way to know whether it’s actually done or just feels done. Leonard would recognize this problem. It’s why he tattooed the crucial facts on his body.
Polaroids and Tattoos
Leonard’s system has layers. Some information is ephemeral, written on Polaroids that might get lost. Other information is permanent, tattooed on his skin. The hierarchy matters.
This maps directly to how you should design agent memory:
Tattoos (persistent, verified facts): Project architecture decisions. Acceptance criteria that have been tested and pass. Schema definitions. These go in files that survive across sessions.
Polaroids (working context): Current task progress. Recent decisions and their rationale. Blockers encountered. These go in session state that the next agent can read.
Mental notes (ephemeral reasoning): Chain-of-thought during implementation. These can disappear, because the artifacts they produce (code, tests, docs) carry the important bits forward.
The mistake most teams make is treating everything like mental notes. The agent reasons brilliantly, produces good work, then the session ends and all that context evaporates. The next agent starts from scratch, re-discovers the same constraints, makes the same initial mistakes, and wastes half its context window getting back to where the previous session left off.
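To make the layers concrete, here’s a minimal sketch in Python, assuming a simple file-based layout. The paths and function names are illustrative, not a standard; the point is the separation, not the specific files.

```python
import json
from pathlib import Path

# Illustrative layout: durable facts vs. working context.
TATTOOS = Path("memory/decisions.md")    # permanent, verified facts
POLAROIDS = Path("memory/handoff.json")  # working context for the next session

def record_tattoo(fact: str) -> None:
    """Append a verified, permanent fact. Tattoos survive every session."""
    TATTOOS.parent.mkdir(parents=True, exist_ok=True)
    with TATTOOS.open("a") as f:
        f.write(f"- {fact}\n")

def leave_polaroid(state: dict) -> None:
    """Overwrite working context: progress, rationale, blockers."""
    POLAROIDS.parent.mkdir(parents=True, exist_ok=True)
    POLAROIDS.write_text(json.dumps(state, indent=2))

def read_polaroids() -> dict:
    """What the previous session left behind. Empty on a true cold start."""
    if POLAROIDS.exists():
        return json.loads(POLAROIDS.read_text())
    return {}
```

Mental notes get no function at all, deliberately: the chain-of-thought stays in the conversation and is allowed to evaporate, because its conclusions land in tattoos and polaroids.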
The System Is the Intelligence
Leonard is not smart despite his amnesia. He is effective because of his system. The system compensates for what his biology cannot provide.
Your agents need the same thing. Not bigger context windows, but external scaffolding that makes context windows less critical. Not better reasoning, but better handoff protocols that preserve the conclusions of previous reasoning.
The teams that get this right treat each agent session like a shift change at a hospital. The outgoing nurse doesn’t just walk away. There’s a structured handoff: what’s been done, what’s pending, what to watch for, what the patient said that might matter later.
Build that handoff into your agent architecture. Every session should begin by reading what the previous session left behind. Every session should end by writing what the next session needs to know.
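A shift-change handoff can be as simple as a structured document that every session reads first and writes last. Here’s a sketch, assuming a JSON handoff file; the filename and field names mirror the hospital analogy and are illustrative:

```python
import json
from pathlib import Path

HANDOFF = Path("handoff.json")  # illustrative filename

def begin_session() -> dict:
    """First act of every session: read what the last session left behind."""
    if HANDOFF.exists():
        return json.loads(HANDOFF.read_text())
    return {"done": [], "pending": [], "watch_for": []}

def end_session(done: list, pending: list, watch_for: list) -> None:
    """Last act of every session: write what the next session needs to know."""
    prior = begin_session()
    HANDOFF.write_text(json.dumps({
        "done": prior["done"] + done,          # history accumulates
        "pending": pending,                    # replaced: current truth only
        "watch_for": prior["watch_for"] + watch_for,
    }, indent=2))
```

Note the asymmetry: completed work and warnings accumulate, while the pending list is replaced outright, so the next shift inherits the current plan rather than every stale plan before it.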
For Those Who Build
Design your agent systems around the amnesia, not against it. Accept that every session starts cold. Make cold starts fast and well-informed.
Separate your storage layers. Permanent decisions go in durable files. Session context goes in structured handoff documents. Ephemeral reasoning can stay in the conversation.
Verify externally. An agent cannot reliably assess its own completeness. Use acceptance criteria, test suites, and explicit checklists that exist outside the agent’s memory.
Keep sessions focused. A single, clear objective per session outperforms an ambitious multi-goal session that runs out of context halfway through.
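The “verify externally” principle can be sketched as a checklist of commands that must exit cleanly, run outside the agent’s own judgment. The criteria below are placeholders; substitute your project’s real test and lint commands:

```python
import subprocess

# Placeholder acceptance criteria: each maps a name to a shell command
# that must exit 0. Swap in your project's actual checks.
CRITERIA = {
    "tests pass": ["python", "-m", "pytest", "-q"],
    "types check": ["mypy", "src"],
}

def verify(criteria: dict) -> dict:
    """Run every check externally; the agent never grades its own work."""
    results = {}
    for name, cmd in criteria.items():
        try:
            proc = subprocess.run(cmd, capture_output=True)
            results[name] = proc.returncode == 0
        except FileNotFoundError:
            results[name] = False  # a missing tool counts as failure, not success
    return results

def is_done(criteria: dict) -> bool:
    """'Done' is a property of the checklist, not of the agent's feeling of done."""
    return all(verify(criteria).values())
```

Because the checklist lives in a file and the checks run as subprocesses, a fresh session can ask “am I done?” and get an answer that doesn’t depend on anything it remembers.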
Leonard solved a murder with fifteen-minute memory. Your agents can ship features with the same approach. The secret was never intelligence. It was architecture.