How To Make AI Not Sound Cringey
Nate Whittemore (a fellow Rhinebeckian) recently shared his POV, along with five techniques for breaking "AI's tyranny of the average," on his AI Daily Brief podcast. His framework aligns nicely with how we've operationalized AI at Moving Parade, where acceleration only works if the output is actually as good as (or better than) what a human would produce. Here's his playbook, plus a few more techniques we use all the time.
The Sameness Problem
AI is trained on everything humans have ever output. That makes it optimized for average by design. As you've probably noticed, there's a high floor: the content and analysis are passable. But the ceiling caps out pretty consistently.
Average is fine when you're messing around. It's not fine when you're shipping.
That's why AI, as tech writer Alex Kantrowitz puts it, has a "Sameness Problem": generative AI produces "the average of averages," seeking to minimize distance from the mean. The result? Every Sora video starts to look the same. Every AI image has that telltale quality. Every business email sounds like it came from the same agency.
This doesn’t really help us. Generic content doesn't land. Generic strategy doesn't differentiate. Generic anything wastes the tool's leverage.
We use AI extensively. It's baked into how we plan campaigns, develop strategy, build decks, and analyze performance.
We've also gotten quite adept at building guardrails to keep it from sounding like everyone else. Below are Nate's five techniques, plus three of our own.
Eight Techniques to Break the Average
1) Negative Style Guide
What it is: Tell the AI what not to do. Ban specific words, phrases, and structures that make output feel generic.
In Nate's example, the word "telemetry" keeps popping up in outputs. A perfectly good word if you're trying to sound smart, but not something you'd use every day.
Why it works: AI defaults to patterns it's seen most often. A negative style guide forces it off those defaults.
How we use it: Our anti-bot checklist explicitly bans clichéd phrases like "unlock growth," "in today's landscape," "best-in-class," and "at the intersection of." We also flag structural tics like the rule-of-three for its own sake, hedging intros, summary padding, and big formatting giveaways like (the f'ing) em dash and arbitrary leading spaces at the beginning of paragraphs.
Move: Build a 5-item "never use" list for your domain. Add to it every time you spot a phrase that makes you cringe.
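A "never use" list can do double duty: prepended to the prompt so the model avoids the phrases, and run as an automated check on the draft that comes back. A minimal sketch in Python, using the banned phrases named above (the function names are our own illustration, not Nate's):

```python
# Banned phrases from the anti-bot checklist (extend per domain).
BANNED = [
    "unlock growth",
    "in today's landscape",
    "best-in-class",
    "at the intersection of",
]

def style_guide_violations(draft: str) -> list[str]:
    """Return the banned phrases that actually appear in a draft (case-insensitive)."""
    lower = draft.lower()
    return [phrase for phrase in BANNED if phrase in lower]

def negative_style_prompt(task: str) -> str:
    """Prepend the never-use list to a task so the model avoids it up front."""
    rules = "\n".join(f"- {p}" for p in BANNED)
    return f"{task}\n\nNever use these phrases or close variants:\n{rules}"
```

Running `style_guide_violations` on every returned draft catches the phrases the prompt-level ban misses, and the list grows every time you spot a new cringe phrase.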
2) Force Divergent Choice
What it is: Don't let AI hedge. Make it commit to a position, pick a path, or recommend one option. NO "on one hand… on the other hand" equivocation.
Nate's insight: AI is trained to be balanced and inoffensive, which means it defaults to showing multiple sides. That's useful for exploration but useless for decision-making.
Why it works: Operators need answers, not options. Forcing commitment surfaces the AI's actual reasoning instead of a laundry list.
How we use it: When we're building campaign plans or messaging hierarchies, we add "Pick one. Defend it. No maybes." to the prompt. If we want to see alternatives, we run the prompt multiple times with different constraints. Not just one prompt with five hedged answers.
Move: Add "no equivocation, and commit to one recommendation" to your next strategic prompt.
3) Cliché Burn Down
What it is: Ask the AI to identify the most common clichés, analogies, or turns of phrase in its own output, and then rewrite to avoid them.
If you did nothing else on this list except run "What are the most common clichés this fell prey to, and how could you change it?" after every first draft, your output would be better than generic LLM writing.
Why it works: AI has pattern awareness baked in. Asking it to surface those patterns and then break them adds a self-correction loop that most people just skip.
How we use it: We run this as a second pass on everything that ships externally, like blog posts, pitch decks, client presentations. The hit rate is high. Even drafts that feel clean often have a few generic phrases baked in.
Move: After your next AI draft, paste it back in and ask: "List the 5 most generic or clichéd phrases in this. Rewrite to remove them."
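The second pass is easy to make a habit if the prompt is a one-liner away. A minimal sketch (the function name is our own, not from the podcast) that wraps any draft in the burn-down prompt above:

```python
def cliche_burn_down(draft: str, n: int = 5) -> str:
    """Build the second-pass prompt: surface the top clichés, then rewrite without them."""
    return (
        f"Here is a draft:\n\n{draft}\n\n"
        f"List the {n} most generic or cliched phrases in it, "
        "then rewrite the draft to remove every one of them."
    )
```

Paste the result into the same conversation (or a fresh one) and you've turned the critique step into a default rather than a thing you remember to do.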
4) Self Critique (+ Model Switching)
What it is: Have the AI draft something, red team itself, then produce a v2 that fixes the issues.
Nate's process: "Draft a first version. Red team it and list the top five ways it's generic. Rewrite a v2 that fixes each issue. Then explain why you changed what you changed." Nate mentioned he uses GPT-4o and o3 mid-conversation, feeding one model's output to another for critique. o3 is more clinical and list-driven; GPT-4o is more conversational. Each surfaces blind spots the other misses.
Why it works: First-pass output is almost never the best version. Critique forces the LLM to iterate. Cross model critique adds dimensionality because different models see the problem differently.
How we use it: For high-stakes deliverables (client decks, strategic briefs), we run draft → critique → rewrite as a standard loop. We also swap models when we're stuck, e.g. ChatGPT for structure and clarity, Claude for creative exploration, then back to ChatGPT to tighten.
Move: Build "draft → self-critique → v2" into your workflow. If you have access to multiple models, use one to draft and another to critique.
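The loop is the same three calls every time, so it's worth wrapping once. A minimal sketch where `drafter` and `critic` are whatever LLM calls you already have (two different models, per the model-switching idea, or the same one twice); the function and parameter names are our own illustration:

```python
from typing import Callable

def draft_critique_rewrite(
    task: str,
    drafter: Callable[[str], str],
    critic: Callable[[str], str],
) -> dict[str, str]:
    """Run the draft -> red-team -> v2 loop.

    `drafter` and `critic` each take a prompt and return text; pointing them
    at different models gives the cross-model critique described above."""
    draft = drafter(task)
    critique = critic(
        f"Red team this draft. List the top 5 ways it's generic:\n\n{draft}"
    )
    v2 = drafter(
        "Rewrite the draft to fix each issue, and explain why you changed "
        f"what you changed.\n\nDraft:\n{draft}\n\nIssues:\n{critique}"
    )
    return {"draft": draft, "critique": critique, "v2": v2}
```

Keeping all three artifacts (draft, critique, v2) is the point: the critique is a mini-audit you can read even when you end up shipping the v2 unchanged.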
5) Use Examples + Explain Why Consensus Is Wrong
What it is: If you have an example of output that's better than average, show it to the AI, then also explain why it's better and what conventional wisdom it breaks.
Nate uses the example of a pitch deck. Standard advice says to follow a 10-slide template. That's fine as a starting point, but decks that stand out rarely follow it. Why? Because whatever is most special about your story should be as close to the front as possible, not buried on slide 6 where the template says "traction" belongs.
If you just share a deck with numbers upfront, the AI might think you always want numbers upfront. If you explain the principle ("lead with what's most differentiated"), it understands the logic and can apply it to other contexts.
Why it works: AI pattern-matches. Without explanation, it learns the surface pattern (numbers upfront) instead of the underlying principle (differentiation upfront).
How we use it: When we build voice guidelines or content templates for clients, we don't just share examples. We write the principle behind each choice. "This headline works because it's a specific claim, not a vague promise." "This structure works because it front-loads proof instead of burying it."
Move: For any template you break, write a 2-line principle explaining why the conventional approach is limited and what you're optimizing for instead.
Here are a few more techniques that we use, largely to force precision where AI most wants to stay vague.
6) Inject Constraint or Controversy
What it is: Give AI a hard limit ("cut this to 100 words") or force a contrarian stance ("argue why the conventional approach fails"). Constraints breed creativity. Safe answers breed sameness.
Why it works: Without constraints, AI optimizes for comprehensiveness. This often means bloat. A tight constraint forces prioritization. You get signal, not filler. Controversy works the same way: it forces the AI to take a position instead of hedging toward the safest middle ground.
How we use it: When drafting LinkedIn posts, we add "300 words max, no fluff" to the prompt. For strategy briefs, we sometimes flip the frame: "Explain why the industry-standard approach to XYZ misses the point."
Move: Next time you get bloated output, add "Cut this by 50% without losing the core argument." Or try: "Take a contrarian stance. Tell me why the standard advice is wrong."
7) Force Iterative Improvement with a Target
What it is: Don't ask for "better." Ask for "25% sharper" and make the AI explain why each change improves it. You get the edit and a mini-audit of what was weak.
Why it works: "Make this better" is vague. AI doesn't know what dimension to optimize for, so it guesses…usually wrong. A quantified target ("25% sharper," "30% more concise") forces intentionality. Asking for an explanation surfaces the reasoning, which you can then apply to future drafts.
How we use it: After a first pass on campaign messaging or deck copy, we'll prompt: "Rewrite this to be 30% more specific. List what you changed and why it's sharper." The resulting output is almost always tighter, and the explanation teaches us what the first draft was missing.
Move: After your next draft, try: "Rewrite this to be 20% more direct. Explain what you removed and why it was weak."
8) Version Against a Target Reader
What it is: Rewrite the same piece for three different audiences: CFO, CMO, skeptical operator. Compare the outputs. Keep the sharpest version. Writing for "everyone" flattens voice.
Why it works: AI defaults to a generic audience because it's been trained on everything. Forcing specificity (who is reading this, what do they care about, what skepticism do they bring) sharpens the argument. The CFO version emphasizes ROI and risk. The CMO version leads with positioning and competitive differentiation. The skeptical operator version cuts preamble and front-loads proof.
One of those three will be stronger than the "general audience" version every time.
How we use it: When we're drafting high-stakes client materials, we version the same brief for three reader types. The exercise surfaces where the logic is weak, where proof is missing, and which frame lands hardest. We don't always ship the specialized version, but testing against it makes the final draft stronger.
Move: Take your next piece of strategic writing and prompt: "Rewrite this for a skeptical CFO who has seen 100 pitches this quarter. What changes?" Then compare it to your original.
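Versioning is mechanical enough to script: the same piece goes out once per persona, and you compare what comes back. A minimal sketch using the three reader types named above (the personas and function name are our own paraphrase):

```python
# Illustrative personas based on the three reader types above.
READERS = {
    "CFO": "a skeptical CFO who cares about ROI and risk",
    "CMO": "a CMO who leads with positioning and competitive differentiation",
    "operator": "a skeptical operator who wants proof up front, no preamble",
}

def version_for_readers(piece: str, readers: dict[str, str] = READERS) -> dict[str, str]:
    """Build one rewrite prompt per target reader; run each, then keep the sharpest."""
    return {
        name: (
            f"Rewrite the following for {persona}. "
            f"Cut anything they would skim.\n\n{piece}"
        )
        for name, persona in readers.items()
    }
```

Feed each prompt to your model, then read the three rewrites side by side; the comparison, not any single output, is what surfaces the weak logic and missing proof.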
What This Unlocks
These eight techniques do something important: they let you use AI for acceleration without sacrificing quality. You get speed and differentiation, not speed or differentiation.
The practical unlocks:
You don't sound like everyone else using AI. Your content, decks, and briefs stay distinct.
You compress iteration cycles. Self-critique loops, model-switching, and targeted improvement let you test 5–10 versions in the time it used to take to draft one.
You build taste into the tool. Negative style guides, principle-based examples, and audience-specific versioning turn AI into an extension of your judgment, not a replacement for it.
This matters more as adoption spreads. Right now, there's leverage in using AI better than the median user. That window won't last forever…but it's wide open now.
Where We Apply This
We use these checks everywhere AI touches our work: content creation (blogs, LinkedIn, decks), strategic briefs, campaign plans, and QA loops. The goal isn't to avoid AI. It's to avoid mediocre AI.
At Moving Parade, we say we're "human-led, AI-powered." That means AI accelerates the work, but humans set the judgment bar. Nate's five techniques are a practical map for holding that line. The three we've added push specificity where AI defaults to vague.
Full credit to Nate Whittemore for the framework. You can find his full breakdown in the AI Daily Brief podcast—episode titled "How to Make Your LLM Not Average." His read of the problem (and the solutions) is sharp, and it's worth the 20 minutes.
