most experiments fail.
the ones that don't change everything.

I'm Justin. I build things with AI and track every experiment here. Red means it didn't work. Green means breakthrough.

The philosophy →
● Breakthrough

r3d.bar goes live

Bought the domain. Connected Cloudflare. Deployed from GitHub. Auto-deploy on push to main. The whole thing took one Claude Code session. First green bar for the site itself.

Read full entry

Draft onboarding test: 12% completion rate

Ran 50 users through Draft’s onboarding flow. 12% made it to their first generated message. The mic permission prompt kills momentum — 38% drop off right there. Need to rethink the flow: show value before asking for permissions.

Read full entry

Transcripted pricing: $0 conversions at $9/mo

Zero paid conversions in the first week at $9/month. Free tier (local transcription + Familiar Voices) is too good — nobody hits the paywall. The AI summaries aren’t compelling enough to upgrade for. Need to find a sharper wedge for paid.

Read full entry

Draft voice detection hits 94% accuracy

After 200+ accepted drafts from 10 beta users, Draft’s style matching hit 94% — measured by edit distance between generated and accepted text. The RLHF loop is working. Users are editing less over time, which means the learning flywheel is real.
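For the curious: edit distance between the generated draft and what the user actually sent is just a normalized Levenshtein score. A rough sketch of how a metric like that could be computed (the function names are mine, not Draft's internals):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,       # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def style_match(generated: str, accepted: str) -> float:
    """1.0 means the user accepted the draft verbatim."""
    if not generated and not accepted:
        return 1.0
    return 1 - levenshtein(generated, accepted) / max(len(generated), len(accepted))

print(round(style_match("hey, running 5 late", "hey, running 10 late"), 2))  # → 0.9
```

The fewer characters a user changes before sending, the closer the score gets to 1.0, which is why "users are editing less over time" and "accuracy is rising" are the same signal.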

Read full entry

CLIs are the future of AI integration

GUIs were always a compromise. Now that AI agents are real, the command line is back — not because developers are nostalgic, but because it's the only interface that actually works for AI.

The web won. For decades, if you wanted to build something people could use, you built a website. Point, click, scroll, tap — the graphical user interface became the universal language of software.

But here’s what nobody told you: GUIs were always a compromise.

They’re amazing for humans exploring unknown territory. Terrible for agents executing known tasks.

And now that AI agents are becoming real — actually useful, not just demos — we’re watching a quiet revolution happen. The command line is back. Not because developers are nostalgic. Because it’s the only interface that actually works for AI.

The GUI problem nobody talks about

When you ask an AI agent to “check my calendar and find a free slot next Tuesday,” here’s what happens if it’s using a graphical interface:

  1. Screenshot the page
  2. Parse pixels into structured data (maybe)
  3. Find the right button (probably)
  4. Click it (hopefully)
  5. Wait for animation
  6. Screenshot again
  7. Repeat 47 times
  8. Fail because the designer moved a button
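For contrast, here's the same task when the calendar exposes a structured function instead of pixels: one call, no screenshots. A toy sketch (the function and data are hypothetical, not any real calendar API):

```python
from datetime import time

# Toy calendar: busy intervals next Tuesday, as (start, end) pairs.
BUSY = [(time(9, 0), time(10, 30)), (time(13, 0), time(14, 0))]

def find_free_slots(busy, day_start=time(9, 0), day_end=time(17, 0)):
    """Return the free (start, end) gaps between busy intervals."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if cursor < start:
            slots.append((cursor, start))
        cursor = max(cursor, end)  # handles overlapping meetings
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

print(find_free_slots(BUSY))  # two free windows: 10:30–13:00 and 14:00–17:00
```

Eight flaky steps collapse into one deterministic call, and the answer survives a redesign of every button on the page.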

This is insane.

We’re teaching billion-parameter models to do the equivalent of playing “Where’s Waldo” with every single interaction. And we wonder why agents are slow and unreliable.

Structure beats pixels

Here’s what Obsidian CLI, WebMCP, and Model Context Protocol all have in common: they expose functionality as structured, documented functions instead of visual interfaces.

When you give an AI agent a function signature, you’ve given it what it does, what it needs, and what it returns. No ambiguity. No pixel-hunting. No “I think this button does what you want.”
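Concretely, all three pieces of information live in an ordinary typed function. A hypothetical tool (the name and behavior are made up for illustration):

```python
import inspect

def find_free_slot(day: str, duration_minutes: int = 30) -> list[str]:
    """Return open calendar slots on `day` (an ISO date) of at least
    `duration_minutes`, as "HH:MM-HH:MM" strings."""
    raise NotImplementedError  # implementation elided; the signature is the point

# Everything an agent needs is machine-readable. No pixels involved:
print(inspect.signature(find_free_slot))  # what it needs, what it returns
print(find_free_slot.__doc__)             # what it does
```

An agent reading that signature knows exactly how to call the tool and what comes back, with zero guessing.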

This is why every major AI platform is rushing to build around these protocols. ChatGPT Actions, Claude’s MCP integration, Gemini Function Calling — they’re all betting on the same thing: structured tools beat unstructured interfaces for agent interaction.
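The wire format is roughly the same everywhere: a name, a description, and JSON-Schema parameters. A sketch of that common shape (exact field names vary slightly by platform, and this particular tool is invented for illustration):

```python
import json

# The common anatomy of a tool definition across function-calling APIs:
# name + description + JSON-Schema parameters.
find_free_slot_tool = {
    "name": "find_free_slot",
    "description": "Find open calendar slots on a given day.",
    "parameters": {
        "type": "object",
        "properties": {
            "day": {"type": "string", "description": "ISO date, e.g. 2026-02-03"},
            "duration_minutes": {"type": "integer", "default": 30},
        },
        "required": ["day"],
    },
}
print(json.dumps(find_free_slot_tool, indent=2))
```

The model never sees your UI; it sees this schema, picks a tool, and emits arguments that validate against it.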

The future is bimodal

The next generation of software won’t be designed for humans to navigate with mice. It’ll be designed for:

  1. Humans to use (GUIs still win here)
  2. AI agents to execute (CLIs/APIs win here)
  3. Both working together (this is the future)

The companies winning the AI era will be the ones that provide both the human-friendly GUI and the agent-friendly CLI/API.

Every feature you build will eventually need a programmatic interface. Not because users will code against it directly. Because their AI assistants will.

The interface of the future is bimodal: beautiful GUIs for humans to explore and control, clean APIs for agents to execute with precision.
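A minimal sketch of what bimodal looks like in practice: one core function, with a thin CLI wrapper for agents. Everything here is hypothetical, built on Python's standard argparse:

```python
import argparse

def find_free_slot(day: str, duration_minutes: int) -> str:
    """Shared core logic: the GUI and the CLI both call this."""
    return f"checking {day} for {duration_minutes}-minute slots"

def main(argv=None):
    # Thin CLI wrapper: agents get flags and stdout instead of buttons.
    parser = argparse.ArgumentParser(prog="cal", description="Calendar tools")
    parser.add_argument("day", help="ISO date to check")
    parser.add_argument("--duration", type=int, default=30, help="minutes needed")
    args = parser.parse_args(argv)
    return find_free_slot(args.day, args.duration)

print(main(["2026-02-03", "--duration", "45"]))
```

The GUI renders `find_free_slot`'s result as a pretty calendar; the CLI prints it to stdout. Same feature, two interfaces, one implementation.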

Read full entry