Indicators that suggest something was written by AI

We researched the indicators that suggest something was written by AI, then asked Claude to write an article about them, in exactly that style.

There’s nothing inherently wrong with using AI to write content. How the tool is used matters more than the tool itself: the context, the extent of its involvement, and the quality of the final result.

Here’s the article that Claude produced.

The Tell-Tale Signs: How to Spot AI-Generated Writing

A practical guide to linguistic fingerprints and rhetorical patterns

As large language models become more sophisticated, distinguishing between human and AI-generated text grows increasingly difficult. Yet AI writing still exhibits characteristic patterns—not because the technology is flawed, but because it optimizes for different goals than human writers do. Where humans write to convince, to express, or to connect, AI writes to sound plausible and complete the pattern.

This creates detectable fingerprints. Not definitive proof—exceptional AI output can pass for human, and rushed human writing can look algorithmic—but strong indicators worth examining. What follows is a field guide to those patterns.

Note: The examples throughout this article deliberately include the very phrases and patterns being criticized. Where you see “it’s worth noting,” “moreover,” or “the complex interplay,” those are intentional demonstrations of how these phrases appear in actual AI text. Consider them specimens under glass.

The Rhetoric of Artificial Confidence

AI writing favors certain classical rhetorical devices with telltale frequency. Not because these devices are inherently problematic—skilled human writers use them too—but because AI deploys them with mechanical consistency. It’s worth noting that this pattern becomes particularly obvious when combined with the verbal tics discussed later.

The Tyranny of Three

Tricolon—grouping ideas in threes—appears obsessively in AI text. “Time, resources, and attention.” “Identify, evaluate, and recommend.” “Clear, concise, and actionable.” The pattern provides satisfying rhythm and apparent completeness. But real writers break this rule constantly. They use two items, or four, or suddenly shift to a list of seven things that matter.

Check any paragraph: if nearly every enumeration comes in threes, you’re likely reading AI output. Human prose has more erratic rhythm.
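
If you want to make that check concrete, a rough sketch like the one below will do: it finds simple “A, B, and C”-style enumerations and reports what share of them come in threes. The regex and the notion of an enumeration are deliberate simplifications for illustration, not a validated detector.

```python
import re

# Rough illustration of the tricolon check: find "A, B(, C...) and D"-style
# enumerations and report what share of them contain exactly three items.
# The regex is a simplification, not a reliable parser of English lists.
ENUM = re.compile(r"\b\w+(?:,\s+\w+)*,?\s+(?:and|or)\s+\w+\b", re.IGNORECASE)

def tricolon_ratio(text: str) -> float:
    """Share of detected enumerations that come in threes."""
    sizes = []
    for match in ENUM.finditer(text):
        items = [i for i in re.split(r",\s+|\s+(?:and|or)\s+", match.group(0)) if i]
        if len(items) >= 2:
            sizes.append(len(items))
    if not sizes:
        return 0.0
    return sum(1 for n in sizes if n == 3) / len(sizes)

sample = "Time, resources, and attention. Clear, concise, and actionable. Cheap and fast."
print(tricolon_ratio(sample))  # two of three enumerations are tricolons -> ~0.67
```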

Perfect Antithesis

“Not just X, but Y.” “This isn’t about A, it’s about B.” AI loves binary oppositions. They structure thought cleanly and sound decisive. The problem is the relentlessness—paragraph after paragraph of balanced contrasts, each perfectly calibrated.

Human writers use antithesis, sure. But they also make messy arguments that don’t fit neat dichotomies. They contradict themselves. They qualify endlessly (“well, actually, it depends…”). AI keeps the antithesis machine running.

Rhetorical Questions That Aren’t Questions

“How do we solve this problem?” “What does this mean for leaders?” “Why does this matter?” AI loves rhetorical questions, especially in clusters of three (see: tricolon). But these aren’t genuine inquiries opening space for thought. They’re declarative statements wearing question marks, transitions disguised as engagement.

Real writers ask questions they don’t know how to answer. Their questions have teeth. AI’s questions are staging—predictable pivots to the answer it already composed.

Verbal Fingerprints

Beyond rhetorical structure, AI exhibits lexical preferences—words and phrases it reaches for with suspicious consistency.

The Usual Suspects

Certain phrases function as AI calling cards:

“Delve into” / “deep dive” — AI’s favorite way to say “examine.” Overused to the point of parody.

“Navigate” — As in “navigate the landscape” or “navigate challenges.” Metaphor abuse.

“At the heart of” — Classic opener. “At the heart of this challenge lies…”

“Landscape” / “ecosystem” / “paradigm” — Business jargon deployed with algorithmic frequency.

“Here’s the thing” / “Here’s an uncomfortable truth” — Faux intimacy. Pretending to level with you.

“Genuinely” / “truly” / “actually” — Intensifiers that don’t intensify. Often signals AI trying to sound sincere.

“Robust” / “holistic” / “nuanced” — Abstract approval words. Everything is robust, nothing is weak. Nothing is simple, everything is nuanced.

“It’s worth noting that” / “It’s important to remember” — Unnecessary signposting. The AI telling you it’s about to tell you something.

“Leverage” (as a verb) — “Leverage our resources,” “leverage synergies.” Corporate speak on autopilot.

“Tapestry” / “mosaic” / “fabric” — Metaphors for complexity. “The rich tapestry of human experience.” Overused to meaninglessness.

“Multifaceted” / “complex interplay” — Everything has multiple facets. Nothing is simple. The interplay is always complex.

“In today’s fast-paced world” / “In an increasingly X world” — Temporal clichés. The world is always fast-paced, always increasingly something.

“Crucially” / “Moreover” / “Furthermore” — Formal connectives that AI deploys with mechanical regularity. Human writers say “also” or just start the next sentence.

“Underscores the importance” / “highlights the need” — Agentless constructions for emphasis. Who’s doing the underscoring? Nobody—it’s just emphasis without an emphasizer.

“Seamless” / “streamlined” / “optimized” — Process improvement vocabulary. Everything must be seamless and optimized, nothing can just work adequately.

“Foster collaboration” / “drive innovation” / “cultivate growth” — Vague business objectives. We foster, we drive, we cultivate—but toward what specific end?

“At scale” / “end-to-end” / “best practices” — Tech and consulting buzzwords deployed without irony or examination.

None of these phrases prove AI authorship alone. But find eight or ten in a single piece? The probability shifts dramatically. And if they appear alongside the rhetorical patterns discussed earlier—tricolons, antithesis, rhetorical questions in clusters—you’re almost certainly reading machine output.
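
To make that kind of tally concrete, here is a minimal sketch that scans a text for a sample of the phrases above and reports how many it finds. The phrase set is only a subset of the list, the file name is hypothetical, and any cutoff you apply to the count (“eight or ten”) is a rule of thumb rather than a proven threshold.

```python
# Illustrative tally of stock phrases, using a small sample of the list above.
# The phrase set, the input file name, and any cutoff you apply to the count
# are assumptions for demonstration, not a validated detector.
STOCK_PHRASES = [
    "delve into", "deep dive", "at the heart of", "navigate the landscape",
    "it's worth noting", "rich tapestry", "complex interplay",
    "in today's fast-paced world", "underscores the importance",
    "foster collaboration", "drive innovation", "best practices",
]

def stock_phrase_hits(text: str) -> list[str]:
    """Return the stock phrases that occur in `text` (case-insensitive)."""
    lowered = text.lower().replace("\u2019", "'")  # normalize curly apostrophes
    return [phrase for phrase in STOCK_PHRASES if phrase in lowered]

with open("draft.txt", encoding="utf-8") as f:  # hypothetical input file
    hits = stock_phrase_hits(f.read())
print(f"{len(hits)} stock phrases found: {hits}")
```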

Hedging to Oblivion

“Often,” “typically,” “generally,” “can be,” “may.” AI hedges constantly. This isn’t caution—it’s risk aversion baked into the training objective. Every statement gets softened to avoid being definitively wrong. Moreover, this tendency toward qualification creates a distinctly non-committal tone that pervades the entire text.

Human experts make bold claims. They stake positions. “This is wrong.” “That won’t work.” “Here’s what happened.” AI rarely commits so clearly. Check for hedge density: if every third sentence contains a qualifier, you’re probably reading machine output. It’s important to remember that real expertise comes with confidence—experts know what they know.
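
The “every third sentence” rule of thumb is easy to approximate in code. The sketch below reports the fraction of sentences that contain at least one hedge word; the hedge list and the sentence splitter are crude assumptions, so treat the number as a prompt to reread the text, not a verdict.

```python
import re

# Approximate the hedge-density check: what fraction of sentences contain at
# least one hedge word? The hedge list and sentence splitter are crude
# assumptions; the "every third sentence" threshold is only a rule of thumb.
HEDGES = {"often", "typically", "generally", "may", "might", "can", "could",
          "perhaps", "arguably", "tends", "somewhat", "relatively"}

def hedge_density(text: str) -> float:
    """Fraction of sentences containing at least one hedge word."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    if not sentences:
        return 0.0
    hedged = sum(1 for s in sentences
                 if HEDGES & set(re.findall(r"[a-z]+", s.lower())))
    return hedged / len(sentences)

sample = "This can often help. Results may vary. The first attempt failed."
print(hedge_density(sample))  # two of three sentences hedge -> ~0.67
```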

The “This Means” Tic

AI loves explicit transitions. “This means several things.” “This requires three approaches.” “This suggests that…” It announces every logical connection rather than trusting readers to follow.

Confident human writers make jumps. They trust you’ll keep up. They don’t feel compelled to label every transition. When you see “This means” appearing multiple times per page, the writer isn’t human—it’s an algorithm signposting its own logic.

Structural Tells

Beyond word choice and rhetorical devices, the architecture of AI writing reveals its origins. In today’s fast-paced world of content generation, these structural patterns have become increasingly obvious to careful readers.

Relentless Balance

Every section gets equal treatment. If the piece discusses four factors, each gets a paragraph of nearly identical length. If there are pros and cons, they’re presented with perfect symmetry. This underscores the importance of recognizing that real writers show bias—they spend three paragraphs on what fascinates them and one sentence on what doesn’t. AI, crucially, allocates attention democratically.

Look for suspiciously even coverage. If every subsection feels exactly as developed as the last, mechanical generation becomes likely. Furthermore, this balance often extends to sentence length and complexity—a kind of optimized uniformity that real writers rarely achieve.
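
One crude way to quantify that evenness is to compare paragraph lengths. The sketch below computes the variation in paragraph word counts; the 0.3 cutoff and the input file name are arbitrary choices made for illustration.

```python
import statistics

# Crude measure of "relentless balance": the coefficient of variation of
# paragraph word counts. A low value means paragraphs are suspiciously even.
# The 0.3 cutoff and the input file name are arbitrary, for illustration only.
def paragraph_length_variation(text: str) -> float:
    """Standard deviation of paragraph word counts divided by their mean."""
    lengths = [len(p.split()) for p in text.split("\n\n") if p.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

with open("draft.txt", encoding="utf-8") as f:  # hypothetical input file
    cv = paragraph_length_variation(f.read())
flag = "  (suspiciously even coverage)" if cv < 0.3 else ""
print(f"paragraph-length variation: {cv:.2f}{flag}")
```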

The Missing Mess

Human first drafts contain false starts, redundancies, tangents. Even polished human writing shows traces of the thinking process—slight repetition, self-correction, moments where the writer changes direction mid-paragraph.

AI produces clean prose on the first try. Every sentence flows. Nothing backtracks. No sudden enthusiasms derail the outline. This perfection is unnatural. Real thinking is messier.

Generic Specificity

This might be the strongest tell. AI provides examples that feel specific without actually being specific. “A procurement policy that made sense for a manufacturing business might not work for a software division.” True! But which procurement policy? Which manufacturing business? Here’s the thing: the absence of actual specificity highlights the need for deeper examination.

Human experts cite actual cases. They name companies, reference specific failures, quote particular people. AI gestures at the specific while remaining safely abstract. It constructs plausible-sounding examples from genre conventions rather than lived experience. The complex interplay between appearing knowledgeable and actually being knowledgeable becomes obvious once you know to look for it.

When an article discusses concrete topics (M&A integration, software development, medical procedures) but never names a single real instance, you’re reading synthetic prose. Moreover, the examples tend to be perfectly illustrative—never messy, never contradictory, never requiring qualification. They exist to demonstrate a point, not to report reality.

The Personality Problem

Maybe the deepest tell isn’t technical but tonal. AI writing lacks personality not because it can’t mimic voices—it can—but because it optimizes for broad palatability. In an increasingly digital landscape, this need to foster engagement while driving acceptance of its content creates a distinctive blandness.

Uniform Register

Human writers slip between registers. They start formal, get colloquial, suddenly throw in profanity or a joke that doesn’t quite land. They can’t help revealing themselves—their class background, their influences, their irritations.

AI maintains consistent register. It picks a lane (professional-casual, academic-accessible, whatever) and stays there. Every paragraph sounds like it came from the same corporate communications seminar, leveraging best practices to deliver a seamless, streamlined reading experience. This tonal uniformity, while polished, is deeply inhuman.

Risk Aversion

Real writers take risks that might fail. They make metaphors that stretch too far. They assert controversial things. They write sentences so long you forget how they started. They occasionally just fuck up.

AI never does this. It never writes a sentence that might confuse someone. It never makes a joke that might offend. It never asserts something that a reader might dispute. This safety-first approach produces competent, bloodless prose. The end-to-end process has been optimized at scale to cultivate a kind of frictionless mediocrity.

The Enthusiasm Gap

When humans write about topics they care about, it shows. They use exclamation points! They write fragments. For emphasis. They can’t resist one more example even though they said they were moving on. They nerd out.

AI maintains professional distance from all subjects. It treats quantum physics the same way it treats sandwich recipes—competent explanation, zero passion. Look for the absence of genuine excitement. If the writer seems equally invested in every paragraph, they’re probably not a writer at all.

What This Means (Ironically)

These patterns aren’t foolproof detectors. Skilled prompt engineering can eliminate obvious tells. And plenty of human writing—especially corporate communications, grant applications, and undergraduate essays—exhibits the same flaws: excessive hedging, relentless tricolons, generic examples, uniform tone.

Perhaps that’s the real revelation. AI doesn’t write badly. It writes the way humans write when they’re not really writing—when they’re performing competence, filling space, hitting word counts. The tell isn’t that AI can’t write like humans. It’s that AI writes like humans who’ve stopped trying.

So when you encounter prose that’s technically correct but spiritually empty, perfectly structured but weirdly distant, competent yet utterly generic—you might be reading AI. Or you might be reading a human who’s forgotten why writing matters in the first place.

Either way, the solution is the same: demand better. From machines, certainly. But mostly from ourselves.
