You are losing context every time you use AI to code.

  • You don't remember why the code was designed this way
  • The AI confidently suggests changes that break invariants
  • Fixing a bug requires re-reading dozens of files
  • You hesitate to touch the code because the original intent is gone

The code exists.
The tests exist.
The reasoning does not.

This is not a tooling problem. This is a context preservation problem.

AI is bad at recovering original intent

AI coding assistants are extremely good at reading, writing, and refactoring code.

They are extremely bad at one thing: recovering the original intent behind an implementation.

That intent usually lives in:

  • Temporary research
  • Dead ends that were explored and rejected
  • Trade-offs that were discussed but never written down
  • Mental context that disappears after /clear

Once that context is gone, both humans and AI are forced to re-read large parts of the codebase, re-discover edge cases, and re-make the same decisions again.

This cost repeats forever.

The problem is structural

Approach          Why it fails
----------------  ----------------------------------------------
Documentation     Written late, outdated quickly, or never read
Code comments     Too local: they explain what, not why
Long AI sessions  Context degrades long before the token limit
Code reviews      Capture correctness, not original reasoning

Implementation decisions are made before code exists,
but most practices try to document them after the fact.

Plan Stack

An AI-native development workflow that treats implementation plans as first-class artifacts.

Instead of letting research and decisions disappear, Plan Stack captures them in lightweight plans that accumulate over time.

  • Long-term memory for humans
  • External context for AI
  • A reliable starting point after /clear
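
Concretely, the accumulated plans can live as dated files committed next to the code. A minimal sketch, assuming a docs/plans/ directory (the file names are illustrative, not prescribed):

docs/plans/
    2024-03-02-auth-token-refresh.md
    2024-04-18-rate-limiter.md
    2024-05-30-webhook-retries.md

Each file holds the distilled research and decisions for one feature, so it ships and evolves with the code it explains.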

Plans are distilled research

Implementing a feature usually involves reading thousands of lines of code, exploring multiple approaches, hitting dead ends, and making dozens of small but important decisions.

A good plan captures all of that in 200–300 lines.

Without a plan: Re-read 50 files. Re-discover everything. Re-make decisions.
With a plan: Read one file. Understand intent. Modify with confidence.

Plans compress large, expensive context into something both humans and AI can reliably consume.
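
What those 200–300 lines contain is up to you; one possible skeleton for a plan file, with every specific below invented for illustration:

docs/plans/2024-04-18-rate-limiter.md
    # Rate limiter for the public API

    ## Goal
    Cap each API key at 100 requests/minute without adding a new datastore.

    ## Approaches considered
    - Token bucket in Redis: rejected, adds an operational dependency
    - Sliding window counter in Postgres: chosen, reuses existing infrastructure

    ## Key decisions
    - 1-minute window granularity: coarse, but cheap to index
    - Fail open on datastore errors so an outage never blocks traffic

    ## Invariants
    - Internal health checks are never rate limited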

Context degrades before it overflows

200K tokens sounds huge. In practice: large codebases fill it quickly, earlier instructions fade, and the AI becomes repetitive or loses constraints.

The real problem isn't hitting the limit — it's losing fidelity long before you do.

Plan Stack embraces this reality instead of fighting it:

  1. Research until the approach is clear
  2. Write the plan
  3. Clear the context
  4. Resume from the plan

You restart at 0% context — without starting over.
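
In a tool like Claude Code, that cycle might look like the following session sketch (the plan path and prompt wording are hypothetical):

    /clear
    > Read docs/plans/2024-04-18-rate-limiter.md, then continue the
    > implementation from "Key decisions". Do not re-explore the
    > rejected approaches.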

The one rule that makes everything work

Add this line to your CLAUDE.md:

CLAUDE.md
    Search docs/plans/ for similar past implementations before planning.

This single line creates the self-reinforcing loop:

  • Claude checks docs/plans/ first
  • Finds distilled context (hundreds of tokens)
  • Skips reading raw code (tens of thousands of tokens)

Without it, Claude starts from raw code every time.
With it, Claude leverages accumulated knowledge automatically.
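
The rule composes with whatever already lives in the file. A minimal CLAUDE.md sketch, where everything except the search rule is illustrative:

CLAUDE.md
    # Project conventions
    - Run the test suite before proposing a commit.
    - Prefer small, focused functions.

    # Plan Stack
    - Search docs/plans/ for similar past implementations before planning.
    - After implementing, write or update the matching plan in docs/plans/.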

Research → Plan → Implement → Review

Research

AI checks docs/plans/ for similar past implementations. Never start from zero.

Plan

AI generates an implementation plan. A human reviews and approves it before coding.

Implement

Code with the plan as a guide. Intent is documented before execution.

Review

AI compares the plan against the implementation, detecting drift between intent and code.
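
No special tooling is required for this step; a review can be driven by a prompt along these lines (wording and file name are illustrative):

    > Compare docs/plans/2024-04-18-rate-limiter.md with the current
    > implementation. List every place the code diverges from the plan's
    > decisions or invariants, and say whether the plan or the code
    > should change.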

Ready to stop losing context?

Start stacking plans. Knowledge compounds with every commit.