Your Security Tools Only Protect What They Can See
The Comfortable Assumption
I run CrowdSec on my Infrastructure Pi. It watches my logs, detects attack patterns, and pushes ban decisions to my router’s firewall. There are currently about 15,000 IPs blocked at the network edge. I felt pretty good about my security posture.
Then I ran a routine log inspection and found SSH brute force attacks that CrowdSec had completely missed.
The Discovery
I have a custom command /a-log-inspector that analyzes centralized logs for anomalies. Running it during ad-hoc maintenance, I noticed something odd in the CrowdSec metrics:
```shell
sudo cscli metrics show acquisition
```
The output showed acquisition sources with their parse rates. Most were healthy - auth logs at 94%, Caddy at 100%, local syslog working fine. But one category caught my eye: SSH-related patterns on the Infrastructure Pi showed 0% detection.
Zero percent. On a public-facing SSH server.
The Root Cause
The Infrastructure Pi runs DietPi, which uses Dropbear instead of OpenSSH by default. I’d never questioned it - SSH worked, seemed secure enough, and Dropbear uses less memory.
Here’s the problem: Dropbear logs to journald, not to /var/log/auth.log.
CrowdSec’s acquisition configuration was set up to monitor file-based auth logs:
```yaml
# /etc/crowdsec/acquis.d/setup.sshd.yaml
filenames:
  - /var/log/auth.log
labels:
  type: syslog
```
This works perfectly for OpenSSH, which writes to auth.log. Dropbear writes to the systemd journal. CrowdSec was faithfully monitoring an auth.log file that never received Dropbear events.
To verify:
```shell
# This shows Dropbear auth events
journalctl -u dropbear.service | grep -i "failed\|invalid"

# This shows nothing from Dropbear
grep -i dropbear /var/log/auth.log
```
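That manual comparison can be wrapped into a small script that flags the mismatch automatically. A rough sketch, where the acquisition directory, log path, and unit name are assumptions based on this particular DietPi setup:

```shell
#!/bin/sh
# Sketch: warn when Dropbear emits auth events to the journal but no
# CrowdSec acquisition file mentions it. Paths and unit names are
# assumptions from this setup, not universal defaults.
ACQUIS_DIR=/etc/crowdsec/acquis.d

# Events Dropbear wrote to the journal today
journal_events=$(journalctl -u dropbear.service --since today 2>/dev/null | wc -l)

# Events Dropbear wrote to the file CrowdSec is actually watching
file_events=$(grep -ci dropbear /var/log/auth.log 2>/dev/null)
file_events=${file_events:-0}

if [ "$journal_events" -gt 0 ] && [ "$file_events" -eq 0 ]; then
    # Journald-only logging: make sure some acquisition source covers it
    if ! grep -rqi dropbear "$ACQUIS_DIR" 2>/dev/null; then
        echo "WARNING: Dropbear logs to journald, but no acquisition source covers it"
    fi
fi
```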
SSH attacks against the Infrastructure Pi had been completely invisible to CrowdSec.
The Fix
CrowdSec can monitor journald directly. The fix was adding a new acquisition source:
```yaml
# /etc/crowdsec/acquis.d/dropbear-journald.yaml
source: journalctl
journalctl_filter:
  - "_SYSTEMD_UNIT=dropbear.service"
labels:
  type: syslog
```
After reloading CrowdSec:
```shell
sudo systemctl reload crowdsec
sudo cscli metrics show acquisition
```
Now I see Dropbear events being parsed. The same SSH brute force patterns that work against OpenSSH work against Dropbear - failed auth attempts, invalid users, repeated connection attempts from the same IP.
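For a quick manual look at who is hammering the box, those journal lines can be tallied by source IP. A sketch, run here against mocked lines; the message format is an assumption modeled on typical Dropbear output, and the real input would come from `journalctl -u dropbear.service`:

```shell
# Count failed-auth attempts per source IP from Dropbear-style log lines.
# The "from <ip>:<port>" message shape is an assumption, not a guarantee.
count_attackers() {
    grep -Eo "from [0-9.]+:" |          # pull out the "from <ip>:" fragment
        sed -e 's/^from //' -e 's/:$//' |  # strip down to the bare IP
        sort | uniq -c | sort -rn          # count, busiest IP first
}

# Mocked sample input; prints each IP with its attempt count
count_attackers <<'EOF'
dropbear[612]: Bad password attempt for 'root' from 203.0.113.5:51234
dropbear[612]: Bad password attempt for 'admin' from 203.0.113.5:51301
dropbear[613]: Login attempt for nonexistent user from 198.51.100.7:40022
EOF
```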
Alternatives Considered
I briefly considered three other approaches:
1. Configure Dropbear to log to a file - possible, but it requires custom systemd configuration. More moving parts.
2. Replace Dropbear with OpenSSH - that would solve the problem, but it means changing something that works. OpenSSH also uses more memory, and the Infrastructure Pi is already at 24% RAM.
3. Forward journald through rsyslog to a file - doable, but it adds latency and complexity.
The journald acquisition was the path of least change. CrowdSec already supports it natively; I just wasn’t using it.
The Broader Lesson
After fixing the Dropbear gap, I audited all services on the Infrastructure Pi:
| Service | Logs to | CrowdSec Watching |
|---|---|---|
| Dropbear SSH | journald | ✅ (after fix) |
| Pi-hole FTL | file | ✅ |
| Caddy | file | ✅ |
| Home Assistant | file | N/A (no parser) |
| rsyslog | file | ✅ |
The audit revealed this was the only gap. But it made me realize I'd been operating on an assumption - "CrowdSec is running, so SSH is protected" - without verifying the full detection chain.
What Changed
I updated my /a-security-audit command to include acquisition coverage checking. Now when I run security audits, it verifies that expected log sources are actually being parsed:
```shell
# Part of automated security audit
sudo cscli metrics show acquisition | grep -E "parsed|unparsed"
```
If any critical service shows 0% parse rate, that’s a HIGH finding.
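That check can be made fully mechanical. A sketch, demonstrated against a mocked metrics table since the real thing needs a live CrowdSec instance; the column layout below is an assumption modeled on cscli's table output, so the field positions may need adjusting on a real box:

```shell
# Flag acquisition sources that read lines but parsed none of them.
# Assumed columns: | source | lines read | lines parsed |
check_acquisition() {
    awk -F'|' 'NF >= 4 {
        gsub(/ /, "", $2); gsub(/ /, "", $3); gsub(/ /, "", $4)
        # Only data rows: lines-read must be numeric and nonzero,
        # lines-parsed must be "-" or zero
        if ($3 ~ /^[0-9]+$/ && $3 > 0 && ($4 == "-" || $4 == 0))
            print "HIGH: " $2 " read " $3 " lines, parsed 0"
    }'
}

# Mocked table; in the audit the input is piped from
# `sudo cscli metrics show acquisition`
check_acquisition <<'EOF'
| Source                      | Lines read | Lines parsed |
| file:/var/log/auth.log      | 1542       | 1458         |
| journalctl:dropbear.service | 87         | -            |
EOF
```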
The Meta-Lesson
This wasn’t a CrowdSec bug. CrowdSec did exactly what it was configured to do. The problem was my configuration assumed all SSH implementations log the same way.
Security tools only protect what they can see. If your SIEM, IDS, or threat detection platform isn’t receiving events from a particular source, those events don’t exist as far as your security posture is concerned.
Audit your log paths. Not once during setup - periodically. Services change, distributions differ, defaults vary. The only way to know your security tools see what you expect is to verify.
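"Verify" can itself be scripted: keep a list of the log sources you expect on each host and cross-check it against what the acquisition configs actually reference. A sketch; the expected-source list and the Pi-hole/Caddy paths are assumptions for this particular host, not defaults you can count on:

```shell
# Cross-check expected log sources against CrowdSec acquisition configs.
# The list below is per-host knowledge a tool cannot infer; the paths
# are assumptions for this setup.
expected_sources="dropbear.service /var/log/pihole/FTL.log /var/log/caddy/access.log"

check_coverage() {
    dir="$1"   # directory holding acquisition YAML files
    for src in $expected_sources; do
        if grep -rq "$src" "$dir" 2>/dev/null; then
            echo "OK: $src"
        else
            echo "MISSING: $src"
        fi
    done
}

# Real usage: check_coverage /etc/crowdsec/acquis.d
```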
This gap was discovered during a routine audit in January 2026. The fix took five minutes; finding the gap took months of assuming it wasn’t there.
Written with Claude.