● Breakthrough

Draft voice detection hits 94% accuracy

After 200+ accepted drafts from 10 beta users, Draft’s style matching hit 94% accuracy, measured by the edit distance between the generated text and what the user actually sent: the fewer the edits, the higher the score.

This is the green bar I’ve been waiting for.

What 94% actually means

Edit distance measures how much you change the AI’s output before sending it. If Draft generates “Hey, wanted to follow up on that” and you send “Hey, following up on that” — small edit, high accuracy. If you rewrite the whole thing, the score tanks.
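For concreteness, here’s a minimal sketch of a scoring function with that behavior. Draft’s exact formula isn’t spelled out here, so this assumes character-level Levenshtein distance normalized by the longer string; the function names are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via the standard DP recurrence."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]


def style_accuracy(generated: str, sent: str) -> float:
    """1.0 when the draft is sent verbatim; tanks on a full rewrite."""
    if not generated and not sent:
        return 1.0
    dist = levenshtein(generated, sent)
    return 1.0 - dist / max(len(generated), len(sent))


draft = "Hey, wanted to follow up on that"
print(style_accuracy(draft, draft))                              # 1.0: sent as-is
print(style_accuracy(draft, "Hey, following up on that"))        # lower: small edit
print(style_accuracy(draft, "Per my last email, checking in."))  # lowest: rewrite
```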

At 94%, most users are sending Draft’s output with almost no changes. The model has learned how they write.

How we got here

Draft uses a simple RLHF loop: every time you accept a draft, that becomes a positive training signal. Every time you edit heavily or reject one, that’s negative signal. After enough accepted drafts, the model stops sounding like Claude and starts sounding like you.
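The pipeline details aren’t covered in this post, but accept/edit/reject events map naturally onto preference data. Here’s a hedged sketch of how that signal might be collected, reusing style_accuracy from the sketch above; the thresholds and record shape are assumptions, not Draft’s actual values.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DraftEvent:
    prompt: str          # the context Draft was replying to
    generated: str       # what the model proposed
    sent: Optional[str]  # what the user actually sent; None = rejected outright

# Assumed cutoffs, not Draft's real numbers.
ACCEPT = 0.90      # near-verbatim send: positive signal
HEAVY_EDIT = 0.60  # below this, treat the send as a rewrite: negative signal

def to_training_example(ev: DraftEvent) -> Optional[Tuple[str, str, Optional[str]]]:
    """Map one event to (prompt, chosen, rejected), or None to skip.

    Heavy edits are the richest signal: the user's final text shows the
    voice the draft should have had, so it becomes 'chosen' and the
    model's draft becomes 'rejected'. Near-verbatim accepts reinforce
    the draft as-is.
    """
    if ev.sent is None:
        return None  # outright reject: no preferred text to pair against
    score = style_accuracy(ev.generated, ev.sent)
    if score >= ACCEPT:
        return (ev.prompt, ev.generated, None)     # positive-only example
    if score <= HEAVY_EDIT:
        return (ev.prompt, ev.sent, ev.generated)  # user text beats draft
    return None  # ambiguous middle ground: drop it
```

Pairs like these could feed a local preference fine-tune; the point is that every send produces labeled data with zero extra effort from the user.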

The first 10-20 drafts for any user are generic. By draft 50, it’s getting close. By draft 100, most users stop noticing the difference between their own writing and Draft’s output.

That’s the flywheel. And it’s working.

What this unlocks

Personal voice learning is the moat. Nobody else is building this locally, with no cloud and no data leaving your device. Wispr Flow and Superwhisper are great products, but they don’t learn you.

Draft does. And now we have the numbers to prove it.

Next: ship this to 50 more beta users and see if the accuracy holds at scale. If it does, we have a product worth charging for.