Written with Claude.
A Session I Mostly Didn’t Run
I noticed something today while closing out a Claude session. The epic I had just finished implementing, #349 Parallel Agent Dispatch Quality, bundled three sub-issues I had not written. Prior sessions of mine had written them, and I had no real memory of the specifics. The code-reviewer agent sometimes ships findings that are wrong. Three parallel reviewers produce prose the coordinator has to merge by hand. Parallel implementation agents rename things in their own files and leave stale references in sibling files. Each of those is a filed issue with a proposed mechanical fix in the body.
Today’s session ran /feature-dev on the epic that grouped them, and over a few hours it wrote a new quote-gate for the reviewer agent, wired four reviewer files to a shared output template, added a read-only grep sweep after parallel dispatch, bumped three plugins, and closed all four issues. I spent about ten minutes of real attention across the whole run. I answered “keep all three sub-issues in scope” at the very start. I picked an architecture from three proposals in the middle. I skimmed the summary at the end.
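The grep sweep is the easiest of those fixes to picture. A minimal sketch of the idea, assuming the renames are recorded as old-name to new-name pairs; the function name, file layout, and `.py` filter here are illustrative, not what the actual skill does:

```python
import pathlib
import re

def stale_reference_sweep(root: str, renames: dict[str, str]) -> list[tuple[str, int, str, str]]:
    """Read-only sweep: flag word-boundary hits of any old identifier that an
    implementation agent renamed in its own file but may have left dangling
    in a sibling file. Returns (path, line number, old name, new name)."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        text = path.read_text(encoding="utf-8")
        for old, new in renames.items():
            # \b keeps "read_config" from matching inside "thread_config_lock"
            for match in re.finditer(rf"\b{re.escape(old)}\b", text):
                line_no = text.count("\n", 0, match.start()) + 1
                hits.append((str(path), line_no, old, new))
    return hits  # nothing is edited; the coordinator decides what to do with hits
```

The read-only part is the point: the sweep reports stale references, and a human or coordinator step decides whether each one is actually stale.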
The Three-Stage Loop
There is a loop I did not deliberately build, but it is now how my work moves forward.
Stage one runs inside /stop, the skill that closes a session. It has a step called “process feedback.” The prompt for that step is specific: look at the session for reasoning-moments. Places where I held two valid interpretations and had to pick. Places where I chained a conclusion across three files. Places where a rule in CLAUDE.md applied but the skill prompting me did not surface it at the friction point. For each one, file a GitHub issue proposing the simplest mechanical fix. A constant. A checklist row. A decision table. Something a smaller model could follow without re-deriving the judgment.
Stage two runs whenever I invoke /issues-planner or one of its cousins. It scans open issues and notices clusters. Three issues pointing at the same code path become an epic. Today’s epic #349 was formed that way, in a session I do not remember running.
Stage three runs when I type /feature-dev 349. That skill is the nineteen-step pipeline that writes the spec, reviews it, writes the plan, reviews it, dispatches parallel implementers, reviews them again, closes issues, bumps versions.
None of the three stages asks me much. Stage one does not ask at all; it just writes the issues while summarizing the session. Stage two asks me to confirm groupings. Stage three asks me twice across the whole run.
What Would Actually Break
Sit with that for a second. I am approving fixes to the pipeline I use, filed by the same pipeline, grouped by a skill I wrote eight weeks ago. If I zone out, the loop still runs. If I rubber-stamp, the loop still runs.
The thing that keeps me honest is a paragraph at the end of every /stop summary, called ELI7. The rule for that paragraph, written into my CLAUDE.md, is specific: “not dumbed down, just put in terms where the jargon and idioms of software engineering melt away.” It even has an anti-example: “NFS connection between 3900x and Synology NAS” is good. “Network cable between two computers” is content-free.
Today’s ELI7 said Claude’s code-reviewer was sometimes shipping confident-sounding findings that a one-file read would have disproved. That I built a Quote-Gate: before reporting a problem, the reviewer has to paste the exact lines that prove it, and the coordinator then checks that quote sits at the claimed location, not just somewhere in the file.
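The location check is the teeth of that gate. A toy sketch of the idea, with a name and signature of my own invention rather than anything from the actual prompt:

```python
def quote_holds(file_text: str, quote: str, claimed_line: int) -> bool:
    """Pass only if the quoted snippet starts exactly at the claimed
    1-indexed line of the file, not merely somewhere in the file."""
    file_lines = [line.strip() for line in file_text.splitlines()]
    quote_lines = [line.strip() for line in quote.splitlines()]
    # Slice the window at the claimed location and compare line by line.
    window = file_lines[claimed_line - 1 : claimed_line - 1 + len(quote_lines)]
    return window == quote_lines
```

A finding whose quote does appear in the file, just not at the claimed line, fails the check, which is exactly the confident-sounding-but-wrong case the gate exists to catch.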
That is enough for me to disagree with. If I had read “sometimes shipping confident-sounding findings” and thought no, the real problem is somewhere else, I could have said so. The approval gate only functions because the paragraph gives me something concrete to push against. A dumbed-down version saying “Claude’s code review is better now” would have left me nothing.
What Survives And What Melts
The rule I am converging on: ELI7 keeps the mechanism, drops the plumbing.
What survives is the named thing. “Quote-Gate.” “Line-anchored check.” “Shared template.” “Word-boundary grep sweep.” Counts survive. “Four reviewer files.” “Three sub-issues.” Decision points survive. “I answered: keep all three in scope.”
What melts is plugin names, file paths, field names, commit SHAs, acronyms without context, schema details. workspace-toolkit/references/parallel-reviewers.md does not need to appear. FINDING_ID does not need to appear. H/M/L can become “High, Medium, Low” or disappear entirely if it is not the point.
The test is whether a version of me that has not been in the session for three hours can read the paragraph and catch a bad direction.
Closing
I have been using ELI7 for months and thinking of it as a nicety. A readability pass at the end of a long summary. That is wrong. It is the only reason I am involved in my own work anymore.