When Your Reference Implementation Becomes the Real Architecture

The Setup

I built Memory Bank back in September for chungus-net, my home infrastructure repo. It’s a pattern for preserving context across Claude Code sessions - six files that capture what the project is, what just happened, and what’s next.

It worked well. Survived a CrowdSec incident that took down SSH. Survived hostname renames across three Pis. Survived all the chaos that home infrastructure throws at you.

But I didn’t apply it to meap2-it, the system I actually use for client work. Different domain. meap2-it was “working.” And I wanted to see Memory Bank survive more real chaos before trusting it on projects with deadlines.

Christmas gave me the downtime to finally do it. I applied Memory Bank to meap2-it and created universal /start and /stop commands that work in any project. No more maintaining separate session commands for each repo.

The Field Work

Then I did a real PCA (Property Condition Assessment) project. 48-story office building in downtown SF. 38 pieces of equipment to document.

The field work is a basement-to-roof marathon. Mechanical rooms, elevator machine rooms, rooftops. iPad in hand, spotty connectivity in elevator shafts and electrical rooms. Photos of everything - nameplates, equipment conditions, installation dates.

I went in knowing v3.x needed replacement. The old architecture had too many intermediate files:

CSV + profile.md → COORDINATION.md → cost cards

Every time something changed upstream, you had to manually propagate it downstream. Sync bugs constantly. Too many steps.

What Emerged

Half the changes came from field friction, half from desk work fixing what the field revealed.

Proto8 (the field interface) already produces JSON. Converting that to CSV and markdown profiles felt wrong. So I stopped doing it. Just read directly from building.json.

building.json → cost cards

One file. Everything reads from it, some things write back to it. The sync bugs disappeared because there was nothing to sync.

The Comparison

| Area | v3.x (Old) | v4.x (New) |
| --- | --- | --- |
| Data format | CSV + profile.md | building.json single source |
| Session start | /a-dive-in | /start (universal) |
| Session end | /a-wrap-up | /stop (universal) |
| Data prep | /a-build-profile, /a-close-gaps | /a-prepare (reconcile/validate/enrich) |
| Cost workflow | /a-cost only | /a-cost, /a-cost-validate, /a-cost-table |

The Workflow Now

flowchart LR
    subgraph PHASE1["Phase 1: Setup"]
        direction TB
        proposal["Proposal"]
        deploy["/a-deploy"]
    end

    subgraph PHASE2["Phase 2: Field Work"]
        direction TB
        proto8["Proto8 Field Interface"]
        export["Export"]
    end

    subgraph PHASE345["Phases 3-5: Processing"]
        direction TB
        building["building.json"]
        prepare["/a-prepare"]
        cost["/a-cost"]
        validate["/a-cost-validate"]
        costtable["/a-cost-table"]
        report["/a-report"]
        condition["/a-condition"]
        execsum["/a-exec-summary"]
    end

    subgraph CENTRAL["meap2-it (Central)"]
        refdata["reference/data/"]
    end

    proposal --> deploy --> proto8 --> export --> building
    building --> prepare --> building
    building --> cost --> validate --> costtable
    building --> report --> condition --> execsum
    refdata -.->|"live reads"| cost
    refdata -.->|"live reads"| report

    style building fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px

building.json sits at the center. The green box. Everything else orbits around it.

Command Sequence

flowchart LR
    subgraph CENTRAL["meap2-it"]
        deploy["/a-deploy"]
        refdata[("reference/data/")]
    end

    subgraph PROJECT["m2-clients/{project}"]
        subgraph P3["Phase 3"]
            prepare["/a-prepare"]
        end

        subgraph P4["Phase 4"]
            cost["/a-cost"]
            validate["/a-cost-validate"]
            costtable["/a-cost-table"]
        end

        subgraph P5["Phase 5"]
            report["/a-report"]
            condition["/a-condition"]
            execsum["/a-exec-summary"]
        end
    end

    deploy --> prepare
    refdata -.->|"live"| cost
    refdata -.->|"live"| report
    prepare --> cost --> validate --> costtable
    costtable --> report --> condition --> execsum

Repository Structure

flowchart LR
    subgraph MEAP["meap2-it"]
        direction TB
        deploy_src["reference/deploy/<br/>commands, agents"]
        data_src["reference/data/<br/>cost-database<br/>wordbanks"]
    end

    subgraph M2["m2-clients/{project}"]
        direction TB
        analysis["analysis/building.json"]
        cards["cards/"]
        claude[".claude/ (frozen)"]
    end

    deploy_src -->|"copy once"| claude
    data_src -.->|"live reads"| cards

Projects get frozen copies of commands and agents at deploy time. But reference data (cost databases, wordbanks) is always read live from central. I can update pricing without redeploying every project.
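The copy-once/read-live split fits in a few lines of Python. This is a sketch, not the actual implementation; the paths and the cost-database filename are assumptions:

```python
import json
import shutil
from pathlib import Path

def deploy(central: Path, project_dir: Path) -> None:
    """Deploy time: copy commands and agents once. The project owns a frozen
    copy that never changes underneath an in-flight engagement."""
    shutil.copytree(central / "reference" / "deploy", project_dir / ".claude",
                    dirs_exist_ok=True)

def unit_cost(central: Path, equipment_type: str) -> float:
    """Run time: pricing is read live from central on every call, never copied,
    so a pricing update reaches every project without a redeploy."""
    db = json.loads((central / "reference" / "data" / "cost-database.json").read_text())
    return db[equipment_type]["unit_cost"]
```

The design choice is the asymmetry: behavior (commands, agents) is frozen per project for stability, while data (pricing, wordbanks) stays centralized because stale prices are worse than changed prices.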

The Discovery

After the project wrapped, I looked at what I’d been using vs what was in the central repo. The client project had drifted into a completely different architecture.

My reaction: “Huh, this is better.”

Not “oh no, tech debt.” The drift was an improvement. Battle-tested over a real project with real deadlines.

What Got Promoted

11 commands and 5 agents moved from the client project back to central:

Commands: a-cost, a-prepare, a-report, a-condition, a-exec-summary, a-finalize, a-cost-validate, a-cost-table, a-cost-import, a-report-table, a-housekeep

Agents: pca-cost-agent, pca-report-agent, pca-report-reviewer-agent, pca-photo-agent, cruft-scanner-agent

The old v3.x versions got archived to archive/v3.x-obsolete/. Preserved for reference, but no longer deployed to new projects.

The Pattern

1. Client project discovers an improvement
2. Document it in backport-tracker.md
3. Validate in master (meap2-it)
4. Deploy to other clients as needed

The frozen deployment model makes this safe. Existing projects keep their copies. New deployments get the updates. No forced upgrades, no breaking changes in the field.
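I haven't shown backport-tracker.md's actual format anywhere, so treat this as a purely hypothetical sketch of what an entry could look like; the fields and wording are invented:

```markdown
## /a-cost-validate
- Origin: client project, Phase 4 of this engagement
- Change: new command; sanity-checks cost cards before /a-cost-table runs
- Status: validated in meap2-it; ships with new deployments only
```

The point of the tracker is the audit trail: nothing moves from a client repo back to central without a written record of where it came from and why.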

What I Learned

Memory Bank needed real chaos. Three months of home infrastructure incidents proved it worked. Wouldn’t have trusted it on client work otherwise.

Universal commands pay off. Same /start and /stop everywhere means no context switching between projects. Worth the upfront work to consolidate.

Single source of truth actually works. The v3.x coordination file approach seemed reasonable but created constant sync issues. One file that everything reads from eliminated an entire class of bugs.

Client work is the real test. The central repo had v3.x. The client project evolved v4.x under deadline pressure. The client version was better because it had to be.


Written with Claude.