The realisation
Two months ago I was reviewing a batch of AI-generated outbound emails — sequences built for a client using a standard AI-assisted workflow. Technically good. Correct grammar. Clear structure. Relevant industry references. Proper personalisation fields populated.
And they felt completely hollow. Not bad. Not wrong. Just hollow. Like they could have been written for any company in that industry, by anyone who had read a few articles about it, without ever having worked in it.
The emails that were performing — 20%+ open rates, 8%+ reply rates — all had one thing in common. A specific, lived observation from someone who had actually done the work. That realisation changed everything about how I use AI for outbound.
AI generates language by identifying patterns in training data and producing output that matches those patterns. When you ask AI to write a cold email for a B2B SaaS company targeting VP of Sales, it produces language that matches the pattern of good cold emails for that audience.
The problem: every other AI tool given the same prompt produces language that matches the same pattern.
The core problem
Generic is not a quality problem.
It is a pattern recognition problem.
AI produces language that is correct and appropriate — but correct and appropriate is the minimum threshold, not the differentiator. Your prospect's inbox is full of emails that match that pattern. Yours is one more. What differentiates is specificity. And the most powerful specificity in B2B outbound is the lived experience of someone who has actually solved the problem your prospect is facing.
In 10 years of running outbound for B2B companies, the emails that get replies share one quality more than any other: they demonstrate genuine understanding of the prospect's situation from the inside.
Not demographic understanding — "you are a VP of Sales at a Series B SaaS company." Inside understanding — "I know what it feels like to be three months into a new sales role with a pipeline target that the current outbound motion is not going to hit." That kind of understanding can only come from experience. It cannot be generated from a prompt. It can be expressed through AI — but it has to originate from a human who has actually been in that situation.
Before we use AI to generate any outbound copy, we build a library of real stories from the founder or senior team — specific experiences, real client situations, genuine moments of insight — that can be drawn on to humanise AI-generated content.
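The library-building step above can be sketched as a small data structure plus a prompt-assembly helper. This is a minimal illustration, not a real tool: the names (`Story`, `pick_story`, `build_prompt`) and fields are assumptions chosen for clarity, and the example story is the one described below.

```python
# A minimal sketch of a "story library" used to humanise AI outbound copy.
# All names and fields here are illustrative, not any real tool's API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Story:
    """One lived experience captured from a founder or senior team member."""
    source: str    # who the story came from
    summary: str   # the experience in the storyteller's own words
    lesson: str    # the insight it carries
    tags: List[str] = field(default_factory=list)  # situations it applies to

LIBRARY = [
    Story(
        source="founder",
        summary=("A client paid well for 10 months and got measurable "
                 "results, yet still churned: trust had eroded early, "
                 "when we overstated our expertise."),
        lesson="Results alone do not retain clients; early honesty does.",
        tags=["churn", "trust", "agency"],
    ),
]

def pick_story(tag: str) -> Optional[Story]:
    """Return the first story relevant to the prospect's situation."""
    return next((s for s in LIBRARY if tag in s.tags), None)

def build_prompt(prospect_context: str, tag: str) -> str:
    """Assemble a drafting prompt anchored on a real, lived story."""
    story = pick_story(tag)
    fragment = (f" Ground the email in this real experience: {story.summary}"
                if story else "")
    return f"Draft a cold email for: {prospect_context}.{fragment}"
```

The design point is the order of operations: the human story is captured first and injected into the prompt, rather than asking the model to invent specificity it does not have.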
A founder running a B2B outbound agency has a story about a client who paid well for 10 months, got measurable results, and still churned — because trust had eroded early in the engagement when the founder overstated their expertise.
That story yields multiple extractions, each a fragment that can humanise a cold email:

- The admission that early overstatement, not poor results, is what killed the relationship.
- The lesson that measurable results alone do not retain a client once trust has eroded.
- The specific detail of a well-paying, ten-month engagement still ending in churn.
2026
Three years ago, a well-structured AI-generated email stood out because most outbound was manually written and inconsistent. Today, most outbound is AI-assisted and structurally competent. The floor has risen. Standing out now requires not just competent structure but genuine content. The bar has moved from "does this email follow best practices" to "does this email contain something only this person could have written." The founders and agencies that build personal narrative repositories now will have a compounding advantage that purely AI-dependent competitors cannot close — because the experience cannot be generated. It has to be lived.