When Build Artifacts Break Cross-Platform Sync
The Setup
I run a three-node Syncthing mesh for development: Mac Mini (macOS), Dev Pi (Debian/ARM64), and a Synology NAS (DSM/ARM). Code written on any machine syncs to the others within seconds. Seamless, elegant, fast.
Until it wasn’t.
The Symptoms
Files created on Dev Pi weren’t appearing on Mac Mini. The NAS was supposed to be in the middle - Dev Pi syncs to NAS, NAS syncs to Mac - but something in that chain was broken.
Syncthing’s web UI showed the folders were “Up to Date” on all three nodes. But ls told a different story: files I’d created hours ago on Dev Pi just weren’t on the other machines.
Three Root Causes
What looked like one sync failure was actually three separate issues stacked on each other. I only discovered this through sequential debugging - fix one, hit the next wall, repeat. Ten-plus iterations before I had the full picture.
1. NAS Admin Account Expired
Synology DSM had locked the admin account due to inactivity. Syncthing was running but couldn’t actually write to the filesystem.
Symptom: Syncthing showed connected but files weren’t appearing. Fix: Re-enable admin account in DSM Control Panel.
2. fsWatcher Broken on Synology Package
Syncthing’s file system watcher (fsWatcher) uses inotify to detect changes in real-time. The Synology package runs in a sandboxed environment that can’t access inotify properly.
Symptom: Files uploaded to NAS wouldn’t sync until the next scheduled rescan. Fix: Change rescan interval from 3600 seconds to 60 seconds. Not ideal (polling instead of watching), but good enough for my workflow.
```shell
# Check via Syncthing API
curl -s "http://192.168.3.5:8384/rest/db/status?folder=2-project-files" \
  -H "X-API-Key: [key]" | jq '.pullErrors'
```
3. 2792 Permission Errors on Build Artifacts
This was the big one. Syncthing’s error log was flooded with permission errors. All 2792 of them pointed to files inside .venv directories.
Python virtual environments contain platform-specific compiled binaries. A .venv created on Mac (x86_64/ARM64 macOS binaries) syncing to Dev Pi (ARM64 Linux binaries) is fundamentally broken - the platforms are incompatible at the binary level.
Same issue with node_modules. Native Node addons compiled on Mac don’t run on Linux ARM.
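To get a sense of how many incompatible files a single virtualenv contains, a quick count of compiled artifacts is enough. The helper below is a sketch, not anything Syncthing provides; the `.so`/`.dylib`/`.pyc` extensions cover the common cases, not every native format:

```shell
# Count platform-specific artifacts under a directory tree.
# Compiled extensions (.so on Linux, .dylib on macOS) and bytecode
# caches are the files that cannot move between architectures.
count_native() {
  find "$1" -type f \( -name '*.so' -o -name '*.dylib' -o -name '*.pyc' \) | wc -l
}
# Example: count_native .venv
```

Pointing it at a real `.venv` or `node_modules` tree helps explain how one synced workspace can rack up thousands of errors.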
The Solution: .stignore
Syncthing’s .stignore file works like .gitignore - patterns for files that shouldn’t sync.
```
# /volume1/NFSS/2_project-files/.stignore
# Build artifacts - platform specific, regenerate per machine
**/.venv
**/node_modules
**/__pycache__
*.pyc
```
This excludes build artifacts from sync in both directions. Each machine maintains its own .venv and node_modules. Source code and lockfiles (requirements.txt, package.json, package-lock.json) still sync.
The workflow becomes:
- Write code on Mac, commit, sync
- Switch to Dev Pi, pull synced source
- Run `uv sync` or `bun install` to regenerate local artifacts
- Local artifacts stay local
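The regenerate step can be wrapped in a small helper. This is a sketch; the project layout, and the use of `uv` and `bun`, are assumptions from my stack:

```shell
# Regenerate local build artifacts after source syncs to this machine.
# Lockfiles sync through Syncthing; .venv and node_modules are rebuilt
# locally from them. Paths and tooling (uv, bun) are assumptions.
regen() (
  cd "$1" || return 1
  [ -f pyproject.toml ] && command -v uv  >/dev/null && uv sync
  [ -f package.json ]  && command -v bun >/dev/null && bun install
  true  # a missing manifest or tool just means nothing to regenerate
)
# Example: regen ~/sync/2_project-files/myproject
```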
The lockfiles ensure identical dependency versions even though the actual installed binaries differ per platform.
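Because the lockfiles sync, verifying that two machines resolved the same dependency set reduces to comparing lockfile bytes. A hypothetical helper (paths are assumptions, e.g. a copy fetched from the other machine):

```shell
# Identical lockfiles mean identical resolved dependency versions,
# even though the installed binaries differ per platform.
same_lock() {
  cmp -s "$1" "$2" && echo "identical dependency resolution" \
                   || echo "lockfiles diverge: re-sync before debugging"
}
# Example: same_lock requirements.txt requirements.from-pi.txt
```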
Why Not Other Approaches?
Platform-specific sync folders: Could have separate 2_project-files-mac/ and 2_project-files-linux/ directories. Rejected because it breaks the mental model of one workspace and requires duplicating source code.
Post-sync cleanup scripts: Syncthing supports hooks. Could delete .venv after sync. Rejected because it’s fighting the tool rather than using it correctly.
Just sync everything: I tried this first. It worked until it didn’t - some projects with large native dependencies would cause sync conflicts or slow syncs. The permission errors were just the most visible symptom.
Validation
After implementing .stignore, I tested the workflow:
- Create a new Python project on Dev Pi with `uv venv`
- Wait 60 seconds (NAS rescan interval)
- Check Mac Mini: source files appear, `.venv` doesn't
- Run `uv sync` on Mac, which creates a local `.venv` from `requirements.txt`
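The same validation is easy to script. This is a hypothetical sketch of the checks I ran by hand on the receiving machine:

```shell
# Sanity-check a synced project: source files should arrive,
# ignored build artifacts should not. Layout is an assumption.
check_sync() {
  [ -f "$1/pyproject.toml" ] && echo "source synced"   || echo "MISSING: source"
  [ -d "$1/.venv" ] && echo "LEAKED: .venv came through" || echo ".venv excluded"
}
# Example: check_sync ~/sync/2_project-files/myproject
```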
Both machines have working environments with identical dependencies but platform-appropriate binaries. No sync conflicts, no permission errors, no broken workflows.
The 60-Second Rescan Tradeoff
I accepted the 60-second rescan interval rather than fighting the fsWatcher issue. Alternatives existed:
- Syncthing in Docker on NAS: Would have working fsWatcher but adds complexity
- Different sync solution: Defeats the purpose of the existing mesh
- Live with 3600s default: Too slow for active development
60 seconds is tolerable. I work, files sync within a minute, I move to another machine. Not instant, but predictable.
Lessons
1. Stack your diagnosis before your fixes. I wasted time fixing the admin account before understanding the full problem. If I’d diagnosed all three issues first, I could have addressed them systematically instead of repeatedly hitting new walls.
2. Build artifacts are platform-specific by design. Virtual environments, node_modules, compiled extensions - these are explicitly not portable across platforms. Treating them as syncable source is a category error.
3. .stignore follows .gitignore patterns for a reason. The files you don’t commit are often the files you shouldn’t sync. Start with a .stignore that mirrors your .gitignore and adjust from there.
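A rough first pass at that mirroring can be scripted. This converter is a sketch: it drops git's trailing-slash directory convention and makes any-depth matching explicit with `**/` (the style of the .stignore above); negated `!` patterns and root-anchored paths still deserve manual review:

```shell
# Seed a .stignore from a .gitignore: pass comments and blanks
# through, strip trailing slashes, and prefix bare patterns with
# '**/' so they match at any depth.
gitignore_to_stignore() {
  awk '
    /^[[:space:]]*$/ || /^#/ { print; next }     # keep blanks and comments
    {
      sub(/\/$/, "")                             # drop trailing slash
      if ($0 !~ /^(\*\*\/|\/|!)/) $0 = "**/" $0  # anchor at any depth
      print
    }' "$1"
}
# Example: gitignore_to_stignore .gitignore > .stignore
```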
4. Accept good-enough workarounds. The fsWatcher issue has a “proper” fix (Docker, different package), but the polling workaround works. Shipping beats perfect.
This post documents a real debugging session from January 2026. The sequential discovery process took 10+ iterations before all three root causes were identified.
Written with Claude.