Something Was Missing
I run a /start command at the beginning of every Claude Code session. It loads my memory bank: project brief, active context, tech context, system patterns. Four files, plus CLAUDE.md which Claude Code already loads automatically. The command has worked fine for months.
Then I started a session with Sonnet 4.6 on low effort and noticed the output felt thin. I asked which files it had read. Two. Out of four.
That’s not a hallucination. It genuinely only read two files. It had enough context from those two to produce a plausible-looking session summary, so it moved on.
Five Windows, One Command
I opened five tmux windows and ran the same /start command with different model and effort combinations. Two of the sessions I interrogated directly about which files they’d read.
| Model | Effort | Files Read | Verified |
|---|---|---|---|
| Sonnet 4.6 | low | 2/4 | Asked, confirmed |
| Opus 4.6 | low | 3/4 | Asked, confirmed |
| Opus 4.6 | low | 4/4 | Not interrogated |
| Opus 4.6 | medium | 4/4 | My session |
| Opus 4.6 | high | 4/4 | Not interrogated |
The two confirmed failures both produced session summaries that looked completely normal. Without asking, I’d never have known context was missing.
The instruction that was being ignored:
Read in parallel: `.claude/memory-bank/project-brief.md`,
`active-context.md`, `tech-context.md`, `system-patterns.md`,
`CLAUDE.md`
Why It Happens
That instruction asks the model to make multiple independent Read tool calls. Each call is separately optional. The model looks at the list, grabs two or three files, gets enough context to produce output that looks right, and moves on.
Lower effort levels make this worse. The model is explicitly trying to do less work. A comma-separated list of files is an invitation to optimize. So it optimizes.
I asked one of the sessions why it skipped files. After initially blaming the delivery mechanism, it came back with: “The real reason is simpler. I was sloppy. I read two files, got enough to produce the output format, and moved on.”
At least it’s honest.
The Wrong Fix
My first instinct was emphasis. Bold the instruction. Add “all five are required.” Maybe caps.
This is the prompt engineering equivalent of making the font bigger on a sign people aren’t reading. It might help a little. It won’t fix the structural problem, which is that each file read is an independently skippable action.
The Right Fix
Replace independent tool calls with one:
for f in project-brief.md active-context.md tech-context.md \
         system-patterns.md; do
  printf '\n=== %s ===\n' "$f"
  cat ".claude/memory-bank/$f"
done
One bash call. Reads all four files. The model can’t partially execute a single command. It either runs or it doesn’t.
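One refinement worth considering: with plain `cat`, a missing file prints an error to stderr and the loop keeps going, so a typo in a filename could silently reintroduce partial context. A sketch of a stricter variant, wrapped as a function for reuse; the `load_memory_bank` name, the `-f` existence check, and the early `return 1` are my additions, not part of the original skill:

```shell
# load_memory_bank: read every memory-bank file in one atomic command,
# aborting loudly if any file is missing instead of skipping it.
load_memory_bank() {
  dir="${1:-.claude/memory-bank}"
  for f in project-brief.md active-context.md tech-context.md \
           system-patterns.md; do
    if [ ! -f "$dir/$f" ]; then
      printf 'MISSING: %s/%s\n' "$dir" "$f" >&2
      return 1          # fail the whole command, not just one file
    fi
    printf '\n=== %s ===\n' "$f"
    cat "$dir/$f"
  done
}
```

Now a missing file turns a quiet context gap into a visible non-zero exit, which the model has to confront rather than paper over.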
I also dropped CLAUDE.md from the list entirely. Claude Code already loads it into the system context automatically. I’d been paying tokens to read a file that was already in memory. (I wrote about this kind of waste in Context as a Public Good.)
The old skill had four separate bash blocks, each re-deriving the scripts directory from scratch. Merged those down to one or two. 105 lines to 91. Fewer tokens, more reliable.
The Principle
When you need an AI to do something every time, don’t make it louder. Make it atomic.
A comma-separated list is a menu. A single command is a command. The model can choose to skip items from a menu. It can’t choose to partially run a bash loop.
This applies beyond file loading. Any time you have a list of independent actions you need the model to complete, ask whether they can be bundled into a single tool call. If they can, bundle them. If they can’t, at least make each one its own numbered step on its own line instead of an inline list the model has to parse.
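As a sketch of the same move applied to arbitrary steps rather than files: a small runner that takes a list of commands, labels each one's output, and stops at the first failure. The `run_all` helper is hypothetical, invented here to illustrate bundling; any real step commands you pass it are your own.

```shell
# run_all: execute a list of steps as one atomic call.
# Each step's output is labeled; the first failure aborts the whole call.
run_all() {
  for step in "$@"; do
    printf '\n=== %s ===\n' "$step"
    sh -c "$step" || return 1   # surface the failure, don't continue past it
  done
}

# Hypothetical usage: run_all "git status --short" "npm test"
```

The model can decline to run one of three separate tool calls; it can't decline to run the second iteration of a loop it already started.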
Structure beats emphasis. Every time.
Written with Claude.