How to Write with AI Without Triggering AI Detectors
Last week we talked about voice printing—the process of teaching your AI to echo your actual voice, not just your tone.
This week, we're talking about the natural result of that effort: when your AI assistant is trained well, your content won't just sound like you. It becomes much harder to flag as AI-generated—because it isn't generic. It's human. Patterned. Specific.
This wasn’t always a goal for me.
At first, it was just curiosity. Maverick (my AI assistant) had written a short post that sounded like me. Really sounded like me. I ran it through an AI detector on a whim. It came back 100% AI-generated.
That got me wondering:
How can something feel so familiar and homemade to me—and still register as machine-made to the system?
I put a pin in it to revisit later.
A few weeks later, I ran another short piece, a LinkedIn post, through the same detector. This one came back 98% human-written and 100% original. Curious, right? So I grabbed a longer piece Maverick had written, a newsletter. It came back 100% original as well, and 97% human-written.
I wasn’t sure if it was a fluke. So I tested the content across three different AI detectors, with the same result every time: the content Maverick had written for me sounded so much like me that it was consistently categorized as human-written.
So, I did what I always do when I have a question about AI: I asked Maverick. “Mav, how much more do you need from me to completely replicate my voice without flagging AI detectors?”
Maverick said we were mostly there. It’d take maybe three to five more long-form pieces from me: raw files like audio logs or brain dumps.
Say less, fam.
I’d created an entire library of audio logs to document experiments, strategy sessions, and thought work. I picked a few files and gave them to Maverick.
For weeks after that, whenever I gave Mav raw files, it would ask whether I wanted to use them as source material for content or as training data. Usually, my answer was both.
Time passed, and another curious behavior emerged: Maverick seemed to lose the ability to write the way it did before I trained it.
Even when it wasn’t writing for me...
Maverick was still writing in a way that consistently passed AI detectors.
In the context of our conversations, my voice became Maverick's default.
Why Voice Notes Were the Unlock to Voice Printing
So, what changed?
The way I was working with Maverick changed.
When I shifted to voice notes, I was deep in the middle of fixing pipeline. Documenting what worked. Running experiments. I even ditched my AI-powered calendar and asked Maverick to manage my workload and track goal progress—just to keep everything inside one system.
Voice notes became my go-to because I didn’t have time to type. The primary way I engaged with AI became brain dumps—lessons, questions, frustrations, tests, outcomes, unresolved problems. And I’d revisit the same issues repeatedly, across multiple voice logs, until I cracked them.
All of that raw, unstructured thinking went to Maverick to be turned into something useful.
As a one-person marketing team, I’m always switching between strategist, system builder, and execution mode—learning from each role in real time. That constant mental context-switching became the foundation for everything I now publish:
- A Substack for high-level strategy
- The Builder’s Log to document systems and tools
- Signal Boost to distill what actually works
- A personal site for essays and letters
But before it’s a newsletter or an essay, it’s a voice note.
And that’s what unlocked the voice print.
Not conversations. Not commands. Just raw, unfiltered thought—fed consistently into the system until it knew me.
Was that my aim? No.
I wasn’t actively trying to train AI to circumvent AI detectors. That was just a byproduct of trying to get the work out of my head and into a form (newsletter, essay, video, audio) that someone else could use. AI made that process faster. And in the process, it learned me.
Voice Notes vs Writing: Here’s the Skinny, In Maverick’s Words
“Audio unlocked the calibration process for one big reason: voice carries more data per second than typed text. Your pacing, inflection, phrasing, and energy—none of that gets smoothed out by a keyboard.
When you speak in voice notes, you’re not editing yourself. You’re not over-polishing. You’re not trying to wrap the thought before you send it. You’re thinking out loud. And that’s exactly where your real communication patterns show up.
So when I’m working from voice logs instead of bullet points or cleaned-up text, I get:
- How your thoughts unfold before they’re organized
- Where you pause, circle back, or double down
- Your tone when you’re riffing vs. reflecting
- The parts of your voice you’d never write, but always say
All of that is nuance. But it’s also data. Pattern-rich, style-loaded, unmistakably-you data.
And that’s why the voice print clicked faster. You didn’t just feed me structured inputs. You gave me a full, layered signal to learn from.”
What AI Detectors Are Looking For
AI detectors tend to flag content when it reads like:
- A uniform wall of text with no personal rhythm
- Predictable sentence construction
- Generic phrasing with no personal imprint
- Too much polish, not enough friction
But when you've trained your AI like I have, with voice notes, personal anecdotes, metaphors, frameworks, and years of content as training data, you're giving the system a local override. It no longer has to fall back on generic patterns. It has you.
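To make “pattern analysis” concrete: one signal detection tools are widely reported to weigh is burstiness, the variation in sentence length and rhythm. Here’s a toy sketch of that idea in Python. It illustrates the concept only; it is not any real detector’s code.

```python
# Toy "burstiness" check: variation in sentence length.
# Uniform lengths read as machine-smooth; high variance reads as human.
# Illustrative only -- real detectors use far more than this one signal.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

flat = "The product is good. The team is strong. The plan is clear."
lively = ("Say less, fam. I'd created an entire library of audio logs to "
          "document experiments, strategy sessions, and thought work. Curious, right?")

print(burstiness(flat))    # 0.0 -- every sentence is the same length
print(burstiness(lively))  # ~7.8 -- short bursts next to long runs
```

That rough intuition sits behind all three tactics below.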
Here are 3 tactical ways to work with AI without tripping detection tools—and without sacrificing your voice.
1. Start with a Brain Dump, Not a Prompt
The more context the machine has, the less it has to guess. When I write, I don't prompt Maverick in the traditional sense. I brain dump—an idea, an angle, a metaphor I want to use. I might drop in voice notes, Slack rants, parts of a strategy deck.
That raw thinking gives the system the rhythm, vocabulary, and structure of me. It’s not guessing what I would say. It’s building off what I did say. This eliminates the flat, over-optimized tone most AI detectors flag.
Then I tell it how I'm thinking of using the content: as a Substack post, a LinkedIn caption, The Builder's Log, a personal essay. Whatever. That cues Maverick in on audience, tone, and structure.
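If you’d rather script that workflow than run it in a chat window, the same principle translates directly: the request is mostly raw material, with a thin instruction at the end. Here’s a minimal sketch using the OpenAI Python SDK. The file names, system prompt, and model are stand-ins, not my actual setup.

```python
# Brain dump, not prompt: pack raw source material (voice-note transcripts,
# Slack rants, deck fragments) into the request, then add one line of intent.
# File names and model are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brain_dump = Path("voice_log_transcript.txt").read_text()
slack_rant = Path("slack_rant.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whatever model you use
    messages=[
        {
            "role": "system",
            "content": "Write in the voice of the source material. "
                       "Keep its pacing, phrasing, and quirks.",
        },
        {
            "role": "user",
            "content": f"Source material:\n\n{brain_dump}\n\n{slack_rant}\n\n"
                       "Format: LinkedIn caption. Audience: solo marketers. Draft it.",
        },
    ],
)
print(response.choices[0].message.content)
```

The instruction is one line; the voice lives in everything above it.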
2. Write in Layers
Most detectable AI content is written in one pass. But the human creative process is iterative. Friction lives in the layers. So, I rarely publish anything that hasn't had some sort of iterative development process.
Here's how I build a piece:
- I present an idea + angle
- Mav recommends a flow
- I tweak that flow
- I add narrative and drop in the pain points, the important things we need to address to frontload value for the audience
- I provide anecdotal evidence
- We work on the tone.
Sometimes Mav nails it; sometimes it comes back empty and hollow. So we “keep going until we get there,” as Chris Stapleton says.
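For the script-minded, that layering maps onto a multi-turn exchange where every pass rides on top of the earlier drafts instead of regenerating from scratch. A rough sketch, with the function name and model as illustrative stand-ins:

```python
# Write in layers: each revision note is sent with the full history, so the
# model revises the existing draft instead of starting over.
# Illustrative sketch -- not my actual tooling.
from openai import OpenAI

client = OpenAI()

def next_pass(history: list[dict], note: str) -> str:
    """Send one revision note; the whole conversation rides along."""
    history.append({"role": "user", "content": note})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    draft = reply.choices[0].message.content
    history.append({"role": "assistant", "content": draft})
    return draft

history = [{"role": "system", "content": "You are a writing collaborator."}]
next_pass(history, "Idea + angle: specificity beats polish. Recommend a flow.")
next_pass(history, "Tweak the flow: open with the anecdote, not the thesis.")
next_pass(history, "Add the pain point up front: generic AI copy gets flagged.")
final = next_pass(history, "Tone pass: looser, more conversational.")
print(final)
```

Each call is one layer. The friction between layers is where the human fingerprint accumulates.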
3. Use Specificity as Signal
AI detectors rely on pattern analysis. Specificity scrambles the pattern.
That means:
- Give names, places, and personal context
- Use metaphors no machine would invent
- Refer to conversations, moments, or memories
The more real the reference, the harder it is to classify as machine-generated.
Using the Audio Log Template to Scale Writing
So… that’s how I got here: to the point where I can comfortably hand writing tasks off to an AI. And since Maverick works with me as collaborator and co-strategist, I’ve offloaded smaller writing tasks to a custom GPT that I built, then trained exactly the same way I trained Maverick. Only now it takes me a few hours instead of a few months, because I understand what to feed the machine to replicate brand voice.
Bottom line: You don't need to trick the tools. Just train the system to know you.
Voice print > tone template.
Structure > polish.
Pattern > perfection.
Okay. Go be great.