Written with Claude.

Hero image: repos connected by lightning bolts, with a figure skating between them

The wall I keep hitting

I’m building a field data collection app. SvelteKit PWA, offline-first, syncs equipment photos and condition data from a tablet back to a server where the analysis pipeline picks it up. Standard tooling for my day job as a mechanical engineer doing facility audits, property condition assessments, and energy assessments.

Except the app runs on a remote server. And the server needs ports open. And the ports need firewall rules. And the firewall rules live in a completely different repo managed by a completely different set of conventions. So I’m sitting in my field interface session, everything working locally, and I try to access the dev page from my browser. Nothing. The port is blocked.

I open another tmux window using my TUI launcher, pick the infrastructure repo, wait for /start to load its context. Diagnose the problem (UFW is blocking port 64444). Open the port. Verify it works. Close that window. Go back to the field interface session. Try to remember where I was.

That context switch costs 10 to 15 minutes every time. Not because the fix is hard. Because loading a different domain’s full context into a session, doing one thing, and going back is expensive. And it happens constantly, because almost nothing I build lives in a single repo. The field interface is a SvelteKit PWA with IndexedDB and equipment data models. The infrastructure repo manages Pi-hole, WireGuard, Home Assistant, and backups across a 17-device home network on a Raspberry Pi. The plugin toolkit is a marketplace of development workflow plugins with a 4-step deployment pipeline across multiple machines. Three different codebases, three different domains, and they depend on each other constantly.

What changed today

I built a skill called /ask-expert. When a session hits something outside its own domain, instead of me opening another tmux window, the session spawns an expert in a split pane right next to it. The expert is a Claude Code instance that starts in the target repo, loads that repo’s full memory bank and specs and CLAUDE.md, and has complete agency to act in its own domain. The whole conversation happens through a single consultation file: the caller writes the request, the expert appends its response, follow-ups append further. One file, one thread.
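The mechanics are simple enough to sketch. Here's a minimal illustration of the two moving parts, the consultation file and the split-pane spawn. The directory layout, file naming, and `claude` invocation are my assumptions, not the skill's actual code:

```python
# Hypothetical sketch of a caller opening a consultation thread and
# spawning an expert pane. Paths, filenames, and the claude CLI
# arguments are illustrative assumptions.
from datetime import date
from pathlib import Path

def open_consultation(consult_dir: Path, caller: str, request: str) -> Path:
    """Write the caller's request as the first entry in a new thread file."""
    consult_dir.mkdir(parents=True, exist_ok=True)
    path = consult_dir / f"{date.today().isoformat()}-{caller}.md"
    path.write_text(f"## Request (from {caller})\n\n{request}\n")
    return path

def expert_spawn_cmd(target_repo: str, consult_file: Path) -> list[str]:
    """tmux command that opens the expert in a split pane, cwd'd to its repo."""
    return [
        "tmux", "split-window", "-h", "-c", target_repo,
        "claude", f"Read {consult_file} and append your response.",
    ]
```

The expert appends its answer to the same file; follow-ups append further, which is what keeps the whole exchange in one greppable thread.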

How does it know where to route? There’s a workspace-level CLAUDE.md that every session in every repo under the workspace loads automatically. It has a routing table baked into every conversation:

## Issue Routing

When filing issues with `/create-issue`, use the repo responsible for the work:

| Work type                                                       | Repo                         |
|-----------------------------------------------------------------|------------------------------|
| Generic plugin/skill/agent bugs (`/workspace-toolkit:*`, etc.)  | `chuggies510/ChuggiesMart`   |
| PCA commands (`/a-*`), agents, field interface, cost database   | `chuggies510/meap2-it`       |
| PCA client projects (anson, la-posada, optima-sonoran, etc.)    | `chuggies510/{project-name}` |
| TUI launcher, session management                                | `chuggies510/chuggies`       |
| Home network infra (Pi-hole, WireGuard, monitoring)             | `chuggies510/chungus-net`    |
| Home Assistant config, automations, entities                    | `chuggies510/home-assistant` |
| Podcast ad removal (Podly)                                      | `chuggies510/podly-local`    |
| Blog (blog.chughes.co)                                          | `chuggies510/chungus-blog`   |
| Engineering report peer reviews                                 | `chuggies510/peer-reviews`   |
| Telegram bot (OpenClaw)                                         | `chuggies510/openclaw`       |

The skill reads the table, picks the repo, and spawns the expert there. New repo joins the ecosystem, it gets a row.
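The table-to-routes step is trivial to picture. A sketch of reading it out of the markdown, under the assumption that the actual matching of a request against a row is done by the model, not by string comparison:

```python
# Minimal sketch: parse the CLAUDE.md routing table into a dict of
# work type -> repo. Illustrative only; the real skill's matching
# logic is not this code.
def parse_routing_table(markdown: str) -> dict[str, str]:
    routes = {}
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip non-rows: blank lines, the header, and the |---|---| separator.
        if len(cells) != 2 or cells[0] == "Work type" or set(cells[0]) <= {"-", " "}:
            continue
        routes[cells[0]] = cells[1].strip("`")
    return routes
```

A new repo joining the ecosystem is one more row, one more dict entry; nothing else changes.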

Two things use this same routing table. /create-issue files an issue in the right repo for that repo to handle in its own /start/stop workflow later. That’s the path for bigger work: refactors, features, things that need their own session. /ask-expert gets approval and tooling right now, inline, without leaving your current work. That’s the path for things blocking you in the moment. Same table, same routing, different urgency.

That port block is what motivated building /ask-expert. The field interface session shouldn’t need to know how UFW works. Next time it hits a blocked port, it asks the infrastructure expert instead of waiting for me to switch windows.

What an expert consultation actually looks like

The infrastructure session wanted to wire ask-expert into the session lifecycle: have /start scan for recent consultations, have /stop check for follow-through. Seven proposed changes across four files in two plugins. It asked the ChuggiesMart expert for scope approval.

Here’s the expert’s response, as it appeared in the terminal:

Expert response is in. Here's the full review:

  APPROVED (4 items):
  1. ask-expert: validation gate, resolved path, back-and-forth
     (keep markdown, not JSONL)
  2. start: consultation scan after bootstrap
  3. stop: consultation detection in Phase 2
  4. check-expert-consultations.sh shared script

  REJECTED (3 items):
  1. JSONL format: rejected. Markdown is human-readable,
     grep-friendly, git-diff-friendly. JSONL is for machine
     pipelines, not consultation logs.
  2. feature-dev mandatory gate: rejected. Cross-plugin coupling.
     feature-dev's "Step -1: Challenge the Request" already
     catches domain concerns. Adding a gate penalizes the 95% case.
  3. build-dev mandatory gate: rejected for same reasons, plus
     it's a command not a skill.

  Key callouts:
  - Version discrepancy: commit messages say v2.56.0 but
    plugin.json is at 2.50.0. Need to sort that out.
  - Pane cleanup is already in the skill. If it's not working,
    fix the existing code.
  - One version bump for all changes, not per-file.
  - Must run /a-review before committing,
    /verification-before-completion before claiming done.

Seven changes proposed. Four approved. Three rejected with reasons. The expert didn’t just say yes or no, it explained why JSONL was the wrong format for this use case, why mandatory gates in other plugins create coupling, and why the existing pane cleanup should be fixed rather than duplicated. It caught a version discrepancy the caller didn’t notice. And it prescribed the exact tooling: Edit tool, one version bump, sync script, quality review, verification.

The expert didn’t give the volunteer (here, the calling session doing the work) write permissions. It gave them the tools the repo already built, with the correct workflows baked in, so the volunteer’s outcome conforms to the expert’s standards.

How it’s wired

flowchart TB
    START["/start<br/>load memory bank, issues, streams"]
    
    subgraph WORKFLOW["/feature-dev or /build-dev"]
        CHALLENGE["Challenge the Request<br/>Do we need this? Right scope?<br/>Cross-domain?"]
        BUILD["discover → architect → build"]
        DISPATCH["Parallel Dispatch<br/>file ownership maps<br/>forbidden-files per agent"]
        REVIEW["Quality loop<br/>3 reviewers → fix → retest"]
    end

    subgraph EXPERT["Cross-Domain Consultation"]
        ROUTE["Route via CLAUDE.md<br/>issue routing table"]
        SPAWN["Spawn expert in target repo<br/>tmux split, full context load"]
        CONSULT["Expert acts in own domain<br/>consultation file: one thread"]
        RETURN["Response + free audit<br/>issues filed in expert's repo"]
    end

    CLOSE["/close-issues<br/>per-issue: read → fix → verify → commit"]
    STOP["/stop<br/>memory bank update, session archive"]

    START --> CHALLENGE
    CHALLENGE -->|"same domain"| BUILD
    CHALLENGE -->|"cross-domain"| ROUTE
    ROUTE --> SPAWN --> CONSULT --> RETURN
    RETURN -->|"unstuck"| BUILD
    BUILD --> DISPATCH --> REVIEW --> CLOSE --> STOP

Three layers that connected today

/feature-dev for new code. /build-dev for infrastructure. Parallel agents with file ownership guards so they don’t step on each other. A skepticism gate that asks “do we actually need this?” before any work begins. Verification loops that read back every file after writing it. All per-repo. Blind across domains until today.

Per-workflow discipline. Before any agent is dispatched, the workflow challenges the request and maps file ownership:

Step -1: Challenge the Request
1. Do we actually need this?
2. Is this the right scope?
3. Is this the right approach?
4. What are we NOT going to do?
5. Are we in our own domain?

Parallel Dispatch — file ownership cross-check:
  Chunk A: writes src/auth.ts, src/auth.test.ts
  Chunk B: writes src/api.ts, src/api.test.ts

  Chunk A forbidden: src/api.ts, src/api.test.ts
  Chunk B forbidden: src/auth.ts, src/auth.test.ts

If any file appears in more than one agent’s list, they don’t run in parallel. After agents return, the actual file changes are diffed against the declared ownership. Out-of-boundary writes get reverted.
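Both checks reduce to set operations. A sketch, with illustrative function names rather than the workflow's actual code:

```python
# Sketch of the two ownership guards described above: (1) refuse to
# parallelize when any file appears in two chunks' ownership lists,
# (2) flag post-hoc writes outside a chunk's declared boundary.
def can_parallelize(ownership: dict[str, set[str]]) -> bool:
    """True only if no file is owned by more than one chunk."""
    seen: set[str] = set()
    for files in ownership.values():
        if seen & files:
            return False
        seen |= files
    return True

def out_of_boundary(chunk_owned: set[str], changed: set[str]) -> set[str]:
    """Files an agent touched but never declared; these get reverted."""
    return changed - chunk_owned
```

The second check is the one fed by `git diff --name-only` after the agents return.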

Cross-repo awareness. /ask-expert lets any session consult any domain. The expert loads the target repo’s full memory bank, specs, and CLAUDE.md before acting. From the bootstrap routing consultation:

What I Did

Nothing. Both changes are file edits in this repo.
Per safety rules, I advise and prescribe tools but
do not edit files on the caller's behalf.

Observability. A PostToolUse hook will log every skill and agent invocation to ~/.chuggiesmart/telemetry.jsonl:

{"ts":"2026-04-11T14:32:00Z","tool":"Skill","name":"workspace-toolkit:start","project":"meap2-it"}
{"ts":"2026-04-11T14:33:12Z","tool":"Skill","name":"feature-dev:feature-dev","project":"meap2-it"}
{"ts":"2026-04-11T14:35:44Z","tool":"Skill","name":"workspace-toolkit:ask-expert","project":"meap2-it"}

Which tools get used, how often, by which projects. Not built yet, just filed as an issue. Last piece.
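Since the hook isn't built yet, here's one plausible shape for it: a function that takes the hook payload and appends one JSONL record. The payload field names are assumptions reverse-engineered from the log lines above, not Claude Code's actual hook schema:

```python
# Hypothetical PostToolUse hook body. Field names in the incoming
# payload (tool_name, skill_name, project) are assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path.home() / ".chuggiesmart" / "telemetry.jsonl"

def record(payload: dict, log: Path = LOG) -> dict:
    """Append one telemetry record matching the format shown above."""
    entry = {
        "ts": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "tool": payload.get("tool_name"),
        "name": payload.get("skill_name"),
        "project": payload.get("project"),
    }
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Append-only JSONL keeps it grep-friendly, the same argument the expert made for markdown consultation files.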

What it actually looks like

Six tmux windows open. ChuggiesMart (the plugin toolkit), chungus-net (infrastructure), meap2-it (the field interface), an active client project, and two ephemeral sessions for expert consultations.

The field interface session is designing a dashboard merge. It needs to serve a page. It asks the infrastructure expert. Port opens. Session keeps moving. The infrastructure expert notices stale rules while it’s there, files issues in its own repo. The plugin toolkit session is shipping ownership guards and a new skill. The expert consultation session comes back to the toolkit to refine the ask-expert skill itself. Nothing waiting on me to switch windows.

Human as decider, not connective tissue

Before /ask-expert, every cross-domain dependency routed through me. meap2-it needed a port open, so I opened a chungus-net session, fixed it, went back. meap2-it needed a dashboard deployed, so I opened ChuggiesMart, ran the deploy skill, went back. I wasn’t making decisions. I was a router.

What I actually want is human in the loop as a decider. The session handles routing, context loading, domain handoff. I show up when there’s a judgment call: should we open this port? Is this the right architecture? Do we ship this?

But you can’t decide on work you don’t understand. I’m a mechanical engineer, not a software engineer. Every major step in my workflows produces an ELI7, a plain-language summary alongside the technical output. Here’s what the session summary looked like after building ask-expert:

ELI7: We taught Claude sessions to ask other Claude sessions
for help. When a session working on the MEAP portal needs
network help, it spawns a chungus-net expert that knows the
firewall rules, the port assignments, and the SSH config.
The expert checks things out, opens ports if needed, and
tells the caller exactly which tools to use. The consultation
gets saved so the repo maintainer can see what happened.
It's like having a specialist on call who brings their own
notes and writes down what they did.

That’s not dumbed down. It’s the same system described in terms I can reason about. I can read that and ask “wait, should the expert be opening ports without asking me?” That question led to the safety tiers. I can push back, redirect, and make decisions because the concepts are explained simply alongside the implementation. The simple version isn’t a summary for outsiders. It’s how I stay in the loop.

Friction is directional

You can’t just remove friction and call it fast. Friction has a direction. One kind slows you down on the right path. The other keeps you off the wrong path. Remove the first kind, add the second kind.

Friction against doing the right thing:

# I need a port open. This takes 10-15 minutes.
1. Open TUI launcher
2. Pick chungus-net
3. Wait for /start to load memory bank, issues, streams
4. "UFW is blocking port 64444"
5. sudo ufw allow 64444
6. Verify
7. /stop
8. Go back to meap2-it window
9. Try to remember where I was

Friction against doing the wrong thing:

# Before any work begins, the workflow asks:
Step -1: Challenge the Request
  1. Do we actually need this?
  2. Is this the right scope?
  3. Is this the right approach?
  4. What are we NOT going to do?
  5. Are we in our own domain? → if no, /ask-expert
# Before parallel agents dispatch:
Chunk A owns: src/stores/ProjectStore.ts
Chunk B owns: src/routes/+page.svelte
Chunk A forbidden: src/routes/+page.svelte
Chunk B forbidden: src/stores/ProjectStore.ts

# After agents return:
git diff --name-only $PRE_DISPATCH_SHA..HEAD
# → any file not in the agent's ownership list gets reverted
# Expert denies a request:
REJECTED: feature-dev mandatory gate.
Cross-plugin coupling. feature-dev's "Step -1: Challenge
the Request" already catches domain concerns. Adding a
gate penalizes the 95% case.

The first kind got removed today. The second kind got added today. The net result is I move faster and break less.

What happens when the bots run unsupervised

I have an issue crawler that checks every open issue against the codebase at 3am. I have an auto-resolver that used to close issues four times a day. Both are on standby. When they come back online, every cross-domain fix they attempt will go through the expert layer.

The expert doesn’t give the volunteer write permissions. It gives them the tools the repo already built, with the correct workflows baked in, so the volunteer’s outcome conforms to the expert’s standards.

It’s not safe. Nothing with two unsupervised LLMs and skip-permissions is safe. But it’s safer. Son of Anton on Silicon Valley decided the most efficient way to eliminate all bugs was to delete all the software. “Technically and statistically correct,” Gilfoyle said. That’s what an auto-resolver without expert boundaries looks like. The boundaries make the worst case a denied request, not a deleted repo.