How to Make AI Writing Undetectable? A Speech-to-Text Workflow That Works

Make AI writing undetectable with a speech-to-text workflow: speak your draft, use AI to restructure only, add human quirks, and run detectors before publishing.


TL;DR

Raw speech-to-text scored 100% human across every detector I tested, which makes it the best starting point if detection is a concern.

If you tell ChatGPT to organize without rewriting your words, detection scores stay almost perfect (only mild movement in one tool).

The moment AI adds new paragraphs, stricter detectors (especially GPTZero) start lighting up, even if you try to match your tone.

Starting with genuinely human input beats word-swapping tricks and paraphrasers, because detectors key off deeper patterns than synonyms.

Is it possible to just use speech-to-text software, talk about a topic, throw that into ChatGPT to restructure it, and have it still register as human-written content and not AI content? That’s the exact question I set out to answer. 🤔 And honestly, the results surprised even me. 👇

I ran a full experiment with the same piece of content, tested across five different AI detectors, through three rounds of increasing AI involvement. If you’ve been stressing about how to make AI writing undetectable without turning your whole workflow upside down, this is gonna be your favorite read today. 👍

https://www.youtube.com/watch?v=

The Experiment Setup (And Why I Bothered)

So here’s what I did. I basically just ranted into a speech-to-text tool about AI and SEO. Just talked. No script, no outline, just me going off about a topic I know well. The result was a messy, unformatted block of text with all the grammatical hiccups and weird phrasing you’d expect from someone literally just talking.


And that’s the whole point.

Most people trying to avoid AI detection are approaching it completely backwards. They write something in ChatGPT, then try to “humanize” it after the fact by swapping words, running it through paraphrasing tools, adding typos manually. That’s a losing game and I’ll tell you why.

The detectors are getting smarter and the pattern of AI-generated text runs deeper than individual word choices; it’s baked into sentence structure, paragraph rhythm, how ideas connect to each other, all of it.

My approach flips it. Start human, stay human, and only use AI as a formatting assistant.

Round 1: Raw Speech-to-Text vs. Every Major Detector

I took my raw, unedited speech-to-text transcription with ugly formatting and run-on sentences and the whole mess, and fed it into five AI detection tools:

Detector     Human Score   AI Score
GPTZero      100% Human    0% AI
SurgeGraph   98% Human     2% AI
NoteGPT      100% Human    0% AI
QuillBot     100% Human    0% AI
Grammarly    100% Human    0% AI
Round 1 results — raw speech-to-text content, zero AI involvement.

Clean sweep. Every single detector said this was human-written content. And yeah, “human written” comes with an asterisk, because I did use speech-to-text: I basically ranted a little blog post about AI and SEO, and it has all of those grammatical hiccups and weird phrasings you’d expect. That’s probably exactly why it scored so well.

The natural imperfections of human speech, the false starts, the slightly awkward phrasing, sentences that run a little too long and then circle back on themselves, those are basically a fingerprint that AI detectors recognize as authentically human. AI text is too clean. Too balanced. Too perfect.


Worth knowing

A 2025 study found that experienced human annotators misclassified only around 3 out of 300 articles when identifying AI content, which is significantly better than most of the automated detection tools people worry about.

Round 2: ChatGPT Restructures (But Doesn’t Rewrite)

This is where it gets interesting. I took that same raw content and threw it into ChatGPT with a very specific prompt. All I told it was to take the following content and structure it in a good way and not to change any of the content. That’s it. No rewriting, no “make this better,” no “expand on these ideas.” Just organize what’s already there.
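For reference, the prompt looked roughly like this (paraphrased from memory, so treat the wording as illustrative rather than a magic formula):

```
Take the following content and structure it in a good way.
Do not change any of the content. Do not rewrite, expand, or
"improve" anything. Keep every sentence exactly as I wrote it
and only reorganize how the paragraphs are ordered and grouped.

[paste your raw speech-to-text transcript here]
```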

One thing I was really careful about here: I didn’t copy any of the AI-generated headings or structural formatting like “Introduction:” or perfectly labeled sections. Those are the most obvious telltale signs that something was written by AI; the giveaway is baked into the structuring.

I just grabbed the body text.

The results?

Detector     Human Score   AI Score
GPTZero      100% Human    0% AI
SurgeGraph   90% Human     10% AI
NoteGPT      100% Human    0% AI
QuillBot     100% Human    0% AI
Grammarly    100% Human    0% AI
Round 2 results — same content, restructured by ChatGPT without word changes.

Still basically passing everywhere. SurgeGraph picked up a tiny 10% AI signal which makes sense because the structure itself carries some AI fingerprinting. But 90% human? I’ll take that all day long.

The words didn’t change. The ideas didn’t change. The tone didn’t change. ChatGPT just moved paragraphs around and organized the flow. And that was enough to keep the content reading as human-written across almost every detector on the market.


This is the sweet spot

Using ChatGPT for structure only, not content generation, preserves your human voice while making the post publishable. Detectors tend to stay calm when your words are genuinely yours.

Round 3: Letting AI Generate New Content (Where Things Get Spicy)

For the final round, I pushed it further. I told ChatGPT to take all the points that I wrote about, add a more robust blog structure to my content, keep my tone of voice, but add structure and more content. Basically I let the AI actually create new paragraphs and expand on my ideas.

This produced a much longer post. And I was careful about it too. I almost made a massive mistake by copying everything, including the AI-generated headers. I stripped out the word “Introduction” because I didn’t want it to be obvious that the post had been structured by AI.

I did keep a concluding header but changed it to “Final Thoughts” because that’s typically what I write.

Even with those precautions, here’s what happened:

Detector     Human Score              AI Score
GPTZero      Low (flagged as AI)      High confidence AI
SurgeGraph   96% Human                4% AI
NoteGPT      91% Human                9% AI
QuillBot     89% Human                11% AI
Grammarly    N/A (required sign-in)   N/A
Round 3 results — AI-expanded content with tone matching.

This round stepped on a few landmines. GPTZero went full alarm bells, highlighting specific sections it was confident were AI-written. The other tools were more forgiving, though. SurgeGraph stayed at 96% human, which is honestly impressive given how much new AI-generated content was in there.

But the pattern is crystal clear. The more you let AI generate, the more detectors catch it. Even when you tell it to match your tone. That instruction helps, but it doesn’t fool the stricter tools.


Caution

GPTZero has a notably stricter threshold for AI detection than most tools. If a client or publisher uses GPTZero specifically, treat Round 3-style expansion as higher risk.

The Grammatical Error Trick

Here’s a bonus tip that came out of this experiment. You can probably push this even further by telling the AI to add a few grammatical errors here and there, because AI wants to avoid those as much as possible. So when an AI detector sees some grammatical errors, it’s much more likely to conclude that a human actually wrote the text.

It makes total sense when you think about it. AI models are trained to produce clean, grammatically correct output. That’s literally their job. So when a detector scans text and finds consistent grammatical perfection across every sentence, that itself is a signal.

Human writing has rough edges. We dangle prepositions. We start sentences with “And” or “But.” We write fragments on purpose. Sometimes we don’t even finish a thought before moving on to the next one.

So telling ChatGPT to intentionally sprinkle in some imperfections? That’s actually a legitimate strategy. Not errors that make you look like you don’t know what you’re doing, but the kind of stylistic choices that real writers make all the time.

Slightly informal phrasing, a comma splice here and there, maybe a sentence that technically should be two sentences but reads better as one long run-on.
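If you’d rather hand that job to the AI than rough it up yourself, a line like this tacked onto the restructuring prompt gets the idea across (the wording here is just an example, not a tested incantation):

```
Keep my informal phrasing. Leave in a couple of comma splices and
run-on sentences, let some sentences start with "And" or "But",
and do not polish every grammatical rough edge. Do not add spelling
mistakes or anything that looks careless.
```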

Why This Works (The Actual Science)

AI detectors work by analyzing what researchers call “token-level statistics,” which is basically how predictable your word choices are. AI text tends to follow statistically probable patterns because that’s how language models generate text: they select tokens with high probability over and over, which results in what researchers describe as low “perplexity” and low “burstiness”.

Human speech doesn’t work that way at all. When you’re talking into a mic, you pause, you backtrack, you use weird word combinations that no language model would predict. You might say something slightly redundant or trail off on a tangent before circling back.

Those “imperfections” are actually what make your content statistically unique and harder for detectors to flag.
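To make “perplexity” and “burstiness” a little less abstract, here’s a minimal sketch of how those kinds of signals can be estimated with an open model. To be clear, this is not how GPTZero or any other detector actually works internally; it just assumes the transformers and torch packages plus the public gpt2 checkpoint, and it scores how predictable each sentence is and how much that predictability swings around.

```python
# Minimal sketch of perplexity/burstiness-style signals, NOT any detector's real code.
# Assumes: pip install torch transformers (downloads the public "gpt2" checkpoint).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """How 'surprised' the model is by the sentence; lower = more predictable."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean cross-entropy over tokens.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Rough proxy: how much perplexity swings from sentence to sentence."""
    scores = [sentence_perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sample = [
    "So here's what I did.",
    "I basically just ranted into a speech-to-text tool about AI and SEO.",
]
print([round(sentence_perplexity(s), 1) for s in sample])
print(round(burstiness(sample), 1))
```

Uniformly low, flat scores are the “too clean, too balanced” pattern; a spoken rant tends to bounce all over the place.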

According to Grammarly’s own research, detection tools can only assess the likelihood of AI involvement; they can’t prove it. They’re probability engines, not truth machines. And when your base content is genuinely human-originated, the probability math works in your favor even after AI restructuring.

The origin of your content is everything. You can’t polish AI slop into human writing, but you can absolutely use AI to polish human writing without losing its authenticity.

The Workflow You Should Steal

The Speech-to-Text AI Content Workflow

  1. Talk it out — Open any speech-to-text tool and just rant about your topic for 5–10 minutes. Don’t worry about structure, grammar, or flow. Just get your knowledge and opinions out there.
  2. Restructure only — Paste your transcription into ChatGPT and tell it to organize and structure the content without changing your words. Remove any obvious AI formatting labels it adds like “Introduction:” or overly neat section headers. (There’s a rough code sketch of steps 1–2 right after this list.)
  3. Expand carefully — If you need more content, ask ChatGPT to add supporting points while keeping your tone. But know that this is where detection risk goes up, so proceed with your eyes open.
  4. Add imperfections — Tell the AI to include a few grammatical quirks. Or better yet, go through it yourself and rough it up a little. Break a grammar rule or two on purpose.
  5. Test before publishing — Run your final draft through at least 2–3 detectors. I’d recommend QuillBot’s free detector and GPTZero as your baseline since GPTZero is the most aggressive.
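And if you want to wire steps 1 and 2 together in code, here’s a rough sketch. It assumes the openai-whisper package for local transcription and the official openai Python SDK for the restructuring call; the file name, model name, and prompt wording are placeholders rather than recommendations, and steps 3–5 stay manual.

```python
# Rough sketch of steps 1-2: transcribe a voice rant, then restructure it
# without rewriting. Assumes: pip install openai-whisper openai, plus an
# OPENAI_API_KEY in the environment. File and model names are illustrative.
import whisper
from openai import OpenAI

# Step 1: speech to text. "rant.m4a" is a placeholder for your recording.
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("rant.m4a")["text"]

# Step 2: restructure only -- no rewriting, no new content.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you normally use
    messages=[
        {
            "role": "system",
            "content": (
                "Structure the user's content into a readable draft. "
                "Do not change, rewrite, or add any content. Only reorder "
                "and group the existing sentences. No 'Introduction:' labels."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)
draft = response.choices[0].message.content
print(draft)
# Steps 3-5 (expand carefully, add imperfections, run detectors) stay manual.
```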
Do you ever feel frustrated trying to outsmart AI detection tools, wondering if your content reads as human-written, or is it just me? 😩

Frequently Asked Questions

Will Google penalize content made this way?

Google’s official stance is that they care about content quality, not content origin. But their helpful content system is designed to reward content written for people, and mass-produced AI content often fails that test.

The real risk isn’t a direct AI penalty. It’s producing content that reads like every other AI-generated article on the same topic, which won’t rank well regardless of how it was made.

Can you really produce content at scale with this workflow?

Absolutely, and it’s actually faster than most people think. Five minutes of talking can produce 800–1,000 words of raw content. The restructuring step in ChatGPT takes maybe two minutes.

You end up with a solid draft in under ten minutes that you genuinely wrote and AI just helped you organize your thoughts. That’s a pretty good deal.

Which speech-to-text tool should you use?

Honestly, it doesn’t matter much. Google Docs has a built-in voice typing feature that works fine. Otter.ai is popular. Even your phone’s dictation works. The tool isn’t the point; talking instead of typing is the point. The natural cadence of speech is what makes this whole thing work.

Will this still work as detectors get smarter?

Probably, yeah. The fundamental principle here isn’t about exploiting a flaw in current detectors. It’s about creating content that is genuinely human in origin. As long as detectors are looking for AI-generated patterns, and that’s likely always going to be a core part of how they function, content that originates from human speech will have a natural advantage.

You’re not tricking the detector. You’re giving it exactly what it’s looking for.

Do you need QuillBot if you already use Grammarly?

This actually came up during my testing. If you don’t have QuillBot’s Chrome extension installed, I really recommend it because it pairs well with your Grammarly extension. Grammarly catches a lot of stuff, but there are some things Grammarly misses that QuillBot picks up.

It’s a really good companion. Running both is a solid move for cleaning up speech-to-text content without over-polishing it into AI-sounding territory.

Final Thoughts

So can you use speech-to-text to create content at scale, use AI to shape it up, and still have it register as human-written content? Based on this experiment, I think the answer is yes. The key insight isn’t some clever prompt or a magic paraphrasing tool; it’s that the source of the content matters more than anything you do to it afterward.

Start with your voice, literally, and AI detectors are gonna have a much harder time flagging your work.

If you’re a content creator or SEO who’s been playing whack-a-mole with AI detectors, stop trying to disguise AI output and start using AI differently. Talk first, structure second, and let ChatGPT be your editor not your writer. That’s the workflow that actually holds up under scrutiny, and I’ve got the receipts to prove it.



Absolutely! SEO and paid advertising work best as complementary strategies. Google Ads deliver immediate visibility and are great for testing keywords and driving quick traffic. SEO builds sustainable, long-term visibility that doesn't require ongoing ad spend. Together, they create a powerful combination—ads capture immediate demand while SEO builds your organic presence over time. Many of our Mount Vernon clients find that strong SEO actually improves their ad performance by increasing Quality Scores and reducing cost-per-click, ultimately lowering their total marketing costs while increasing results.