Three spaces of context
These days, when I start a new project, my thinking lives in an Obsidian vault – an organized chaos of raw notes, clippings, and wiki pages. Then, when it's time to execute, I switch to VS Code, GitHub, and Claude Code.
One basic question I kept asking: how should the vault, the repo, and the LLM actually relate? After experimenting with pointing the LLM at the vault, copying notes into the repo, and pasting backstory into system prompts, I've landed on one setup: three spaces, each with a clear job and a clean handoff.
Three spaces

Space #1: vault
My thinking space. It's for me. Ideas live here before they're ready. I'm allowed to contradict myself. Notes can be half-finished, hesitant, or wrong.
The agent doesn't read this. Not directly, not via symlink, not via "let me just point you at this folder". The vault stays on my side.
Space #2: repo's docs/
The building space. It's for the project – for me in build mode, for Claude Code, for anyone who contributes. It optimizes for clarity and stable references, not associative thinking.
This may include:
- A spec – what the thing is and how it works.
- A brand doc – how it presents itself.
- ADRs (architecture decision records) – one page per non-trivial decision, with context, decision, and consequences (a sketch follows this list).
- Research – expensive-to-regenerate findings, captured once and compressed.
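For concreteness, here's roughly the shape I have in mind – the filenames and headings are illustrative, not a prescription:

```
docs/
  spec.md          # what the thing is and how it works
  brand.md         # how it presents itself
  research/        # expensive-to-regenerate findings
  adr/
    0001-short-decision-title.md
```

And an ADR stays to one page:

```
# ADR 0001: short decision title

## Context
What forced the decision; the constraints that mattered.

## Decision
What we chose, stated plainly.

## Consequences
What gets easier, what gets harder, what we gave up.
```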
Space #3: index
A short file at the repo root. For me, CLAUDE.md; for other tools, AGENTS.md, .cursor/rules/*.mdc, etc. It points at canonical docs and nothing more. Readable by a human in 30 seconds.
It doesn't duplicate the spec. It doesn't paste in the brand doc. It says: "the spec is over there; the decisions are over there; the brand is over there".
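A sketch of what that index can look like – the paths are hypothetical and match the layout above:

```
# CLAUDE.md

One-paragraph description of the project.

- Spec: docs/spec.md – what we're building and how it works
- Decisions: docs/adr/ – one file per settled decision
- Brand: docs/brand.md – voice and presentation
- Research: docs/research/ – findings we don't want to redo

Read the relevant doc before changing behavior; don't reopen a
decision without reading its ADR first.
```

That's the whole file.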
Lesson
What I kept getting wrong was conflating "what I've thought about" with "what the LLM should see". When I pointed Claude Code at notes still in motion, its suggestions were poor because it was pattern-matching on my uncertainty.
The third space exists because context windows are limited and every interaction loads them. If the agent re-reads a long spec each session:
- Token cost climbs.
- The spec dominates context, crowding out the actual code.
- The LLM pattern-matches on spec phrasing instead of solving the problem in front of it.
The index solves this. It's the table of contents. The agent reads it always, the deeper docs only when relevant.
Crossing ritual
The interface between space #1 and space #2 is crossing. A note is ready to cross when:
- The idea has stopped moving.
- I can write it for someone who isn't me.
- It's worth being version-controlled.
When I cross a note:
- Copy the contents into the appropriate docs/ file.
- Edit for a different audience (no inside expressions, no stale links, no maybe-language).
- git commit.
- In the vault, leave a pointer: crossed → repo/docs/spec.md#section (an example follows this list).
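What the vault side looks like afterwards – the note name and section anchor here are made up:

```
# retry-policy idea

crossed → repo/docs/spec.md#retry-policy

(Everything below can keep drifting. The repo copy above is canonical.)
```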
The vault note can keep evolving – it might become a blog post or feed another project. But once crossed, the repo version is canonical. Crossing is a one-way transfer.
What this gives me
- A clean separation between thinking and shipping. I can be messy in the vault without that messiness leaking into the repo.
- Decision durability. ADRs mean I don't re-surface settled questions. "Should we go with x instead of y?" was a real decision; it's now ADR 0001.
- A short index that stays short. If CLAUDE.md grows past ~200 lines, something belongs in docs/ instead.
- Privacy by default. Personal notes don't accidentally land in a public repo because the boundary is explicit.
A heuristic I'm using
When I'm not sure whether something belongs in the vault or in docs/, I ask:
Would I be comfortable if this exact text appeared verbatim in the GitHub repo's commit history, forever?
If yes, I can commit it. If no, it stays in the vault.
Closing thought
The three spaces aren't really about AI, even though that's what made me sit down and articulate them. They're about respecting the difference between how we think and what we ship. AI assistants are especially allergic to bad context, but the same shape holds for human collaborators.
Built with AI, not by AI.