six agents run my content pipeline and most of them are broken
I built a thing I’m not sure I should have built.
Six Claude agents run on a schedule now. Every night, every morning, they wake up and do jobs for r3d.bar and the broader r3dbars operation. Here’s the roster:
Daily competitor digest — scans GitHub repos, checks App Store reviews, monitors ProductHunt, writes a briefing. This one actually works well. Yesterday it caught the Granola $125M raise before I saw it on Twitter. It found the Fireflies BIPA lawsuit. It spotted Muesli shipping a privacy toggle for clipboard access. Real signal I would have missed.
Community scanner — searches HN, Reddit, indie hacker forums for conversations about local transcription, voice AI, meeting tools. Also pretty good. It found Talat’s TechCrunch feature and flagged Aside (a Claude Code plugin doing vault-native meeting capture). But it couldn’t write the report to my Obsidian vault because another agent had computer-use locked. So it dumped the file in a temp directory I had to go find manually.
Idea collision engine — reads notes from my vault, finds cross-domain connections using embeddings search. Cool concept. Except today it couldn’t access the vault either because directory permissions are a mess when multiple agents run concurrently.
Content batch (that’s this one, writing what you’re reading right now) — reads all the research, reads session transcripts, generates draft posts. The meta-recursion is not lost on me. But here’s the red bar: it can’t actually deploy. It generates drafts in a sandbox. It can’t write to the site repo. It can’t run the build. Every post still requires me to manually copy files, review them, and push.
Nightly vault sync — turns session transcripts into Obsidian notes. Actually useful. Creates dispatch notes from conversations so I don’t lose context between sessions.
Morning brief — supposed to summarize what happened overnight. I haven't checked whether it ran today.
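The idea collision engine's core trick is simple enough to show. Here's a toy version with fake embedding vectors and made-up note names — in the real pipeline the vectors would come from an embedding model over actual vault notes:

```python
# Toy "idea collision": find the pair of notes whose embeddings are most
# similar. Note names and vectors below are illustrative stand-ins.
import numpy as np

def top_collision(notes: dict[str, np.ndarray]) -> tuple[str, str, float]:
    """Return the pair of note names with the highest cosine similarity."""
    names = list(notes)
    # Normalize each vector so the dot product becomes cosine similarity.
    mat = np.stack([notes[n] / np.linalg.norm(notes[n]) for n in names])
    sims = mat @ mat.T
    np.fill_diagonal(sims, -np.inf)  # a note always matches itself; ignore that
    i, j = np.unravel_index(np.argmax(sims), sims.shape)
    return names[i], names[j], float(sims[i, j])
```

The real version has to rank many pairs and filter same-folder matches, but the math doesn't get much deeper than this.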
So here’s what I’m actually learning: the parts that are read-only work great. Scanning, searching, summarizing, analyzing — agents are genuinely good at this. The competitor digest saves me 45 minutes of manual research every morning.
The parts that need to write to my actual system? Disaster. File permissions conflict when multiple agents run at the same time. The sandbox can’t reach my real filesystem. The content pipeline generates drafts I still have to manually deploy. The community scanner writes reports to temp directories that get cleaned up between sessions.
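The lock contention part, at least, is a solved problem. A sketch of the kind of mutex that would stop two agents from fighting over the same resource, using an OS-level advisory lock — the lock path is my invention, not anything the agents actually ship with:

```python
# Minimal cross-process mutex via flock. Any agent that wants the vault
# (or computer-use, or the repo) takes the lock first; a second agent
# either waits or bails out cleanly instead of half-writing files.
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def agent_lock(lock_path: str, blocking: bool = True):
    """Hold an exclusive advisory lock on lock_path for the with-block."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    flags = fcntl.LOCK_EX if blocking else fcntl.LOCK_EX | fcntl.LOCK_NB
    try:
        fcntl.flock(fd, flags)  # raises BlockingIOError if non-blocking and held
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

With `blocking=False` the second agent gets an immediate `BlockingIOError` and can reschedule itself instead of dumping output into a temp directory nobody will find.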
I’m building an automation system that automates everything except the last mile. Which is, you know, the mile that matters.
The fix is probably boring infrastructure work. Shared mount points. A proper deploy pipeline that the content agent can trigger. Some kind of agent coordination layer so they don’t fight over computer-use access. But I keep wanting to build the next cool agent instead of fixing the plumbing on the ones I have.
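The deploy pipeline in particular doesn't need to be clever. A sketch of what "a pipeline the content agent can trigger" could mean — every path here is hypothetical and the build command is a stand-in, not my actual setup:

```python
# Hypothetical last-mile step: move approved drafts out of the agent's
# sandbox output directory into the site repo, then run the build.
import shutil
import subprocess
from pathlib import Path

def deploy_drafts(sandbox_dir: Path, repo_posts_dir: Path,
                  build_cmd: tuple[str, ...] = ("make", "build")) -> list[str]:
    """Copy every .md draft into the repo's posts dir, then build the site."""
    repo_posts_dir.mkdir(parents=True, exist_ok=True)
    deployed = []
    for draft in sorted(sandbox_dir.glob("*.md")):
        shutil.copy2(draft, repo_posts_dir / draft.name)
        deployed.append(draft.name)
    if deployed:
        # Only rebuild when something actually moved.
        subprocess.run(build_cmd, cwd=repo_posts_dir.parent, check=True)
    return deployed
```

Twenty lines. The hard part isn't the code, it's deciding I trust the agent enough to let it call this without me reading every draft first — which, today, I don't.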
That’s the real red bar. Not that the agents are broken — they’re actually surprisingly capable at their core tasks. The red bar is that I’m doing the classic developer thing: building new features instead of making the existing ones production-ready.
Six agents. Zero of them can deploy without me in the loop. That's the irony: I automated a blog about building things, and the blog still ships manually.