Why does AI writing sound generic despite being grammatically correct?
Explores whether the robotic quality of AI text stems from grammatical failures or rhetorical ones. Understanding this distinction matters for diagnosing what AI systems actually struggle with in human-like writing.
Post angle for Medium / LinkedIn
The popular complaint about AI writing is that it sounds generic or robotic. But the metadiscursive noun research gives that complaint a precise mechanism: the gap isn't grammatical, it's rhetorical.
Grammar is the system of structural rules that makes sentences well-formed. Rhetoric is the art of making arguments persuasive — and in academic writing, that means taking evaluative stances. Claiming something is a strong finding (not just a finding). Positioning evidence as evidential (not just informational). Marking an argument as an argument (not just an explanation).
ChatGPT prefers manner nouns: method, approach, process. These are descriptively precise but rhetorically neutral. Human academic writers prefer status nouns (claim, argument, hypothesis) and evidential nouns (evidence, data, finding). These are the nouns that carry evaluative weight — they commit the author to a position about the epistemic status of what's being discussed.
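The noun-category contrast can be caricatured as a simple frequency count. This is a minimal sketch, not the study's actual coding scheme: the word lists below are my own illustrative assumptions, loosely following the manner/status/evidential distinction described above.

```python
import re
from collections import Counter

# Illustrative (not exhaustive) metadiscursive noun categories.
NOUN_CATEGORIES = {
    "manner": {"method", "approach", "process", "procedure", "strategy"},
    "status": {"claim", "argument", "hypothesis", "assumption", "conclusion"},
    "evidential": {"evidence", "data", "finding", "result", "observation"},
}

def noun_category_counts(text: str) -> Counter:
    """Count occurrences of each metadiscursive noun category in `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for category, nouns in NOUN_CATEGORIES.items():
            if word in nouns:
                counts[category] += 1
    return counts

sample = ("Our approach uses a two-step process. The evidence supports "
          "the claim that this method generalizes, a finding we defend "
          "with a further argument.")
print(noun_category_counts(sample))
# Counter({'manner': 3, 'evidential': 2, 'status': 2})
```

A manner-heavy profile on a passage would match the ChatGPT signature; status- and evidential-heavy profiles would match the human one. Real stylometry would need lemmatization and far larger noun inventories; this only shows the shape of the measurement.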
The anaphoric preference compounds this: ChatGPT points backward, summarizing what it has already said. Human writers point forward, framing what they are about to argue. The cataphoric writer is making a bet: here is what I will show you. The anaphoric writer is reporting: here is what I have shown you. One invites the reader in; the other keeps them at a distance.
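The directional contrast admits the same kind of crude heuristic: count backward-pointing versus forward-pointing discourse markers. The marker phrases here are my own illustrative guesses, not the instrument used in the research.

```python
# Hypothetical marker phrases for backward- vs forward-pointing discourse.
ANAPHORIC = ["as shown above", "as discussed", "the aforementioned", "in summary"]
CATAPHORIC = ["as follows", "the following", "below we", "we will show"]

def reference_direction(text: str) -> str:
    """Crudely classify a passage as anaphoric-leaning, cataphoric-leaning, or mixed."""
    lowered = text.lower()
    back = sum(lowered.count(marker) for marker in ANAPHORIC)
    forward = sum(lowered.count(marker) for marker in CATAPHORIC)
    if back > forward:
        return "anaphoric"
    if forward > back:
        return "cataphoric"
    return "mixed"

print(reference_direction("We will show the following three results."))   # cataphoric
print(reference_direction("As discussed, in summary, the point stands.")) # anaphoric
```

On this view, human academic prose should skew cataphoric and ChatGPT prose anaphoric; again, a serious measurement would resolve actual reference chains rather than surface phrases.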
Together: AI text is organizationally coherent and argumentatively inert. It has the skeleton of academic argument without the flesh of evaluative commitment.
The deeper point: this isn't a prompt engineering problem. Evaluative stance-taking requires a writer who has a stake in the argument, who is committed to the claim. Autoregressive generation without genuine intentionality produces text that walks the structure of argument without inhabiting it.
False objectivity as fallback strategy. The matter-of-fact authoritative style of AI posts is best understood as a specific behavioral consequence of the rhetorical gap: when evaluative stance-taking is unavailable, objective claims become a fortification strategy. The text sticks to thematic content and names it neutrally because it cannot perform the evaluative work that would justify the authority the style implies. False objectivity is not chosen — it is the residue left when stance is absent. Authoritative tone plus thematic description approximates the look of expert commentary without requiring the commitment that expert commentary would need. The gap produces a specific output signature: assertion without argument, description pitched as judgment, and a studied neutrality that would be suspicious in human speech but passes unremarked in AI-generated text because readers have no speaker to assess.
The literary analysis implication: The grammar-rhetoric gap becomes most consequential in domains that require evaluative commitment. Literary criticism is the clearest case: a critic must take a position — this metaphor works because X, this poem fails because Y. The evaluative stance is the criticism. Without it, what remains is mechanical description. The Hermeneutics of Artificial Text analysis goes further: AI text "destroys the poetic function of the text" because the poetic function depends on the author's relationship to their material — a relationship that fabricated text cannot have. Research on argument quality (see "Can models learn argument quality from labeled examples alone?") suggests a partial workaround: providing explicit critical frameworks (New Criticism, reader-response theory, formalist analysis) as scaffolding might enable LLMs to produce structured literary criticism — not because they develop evaluative commitment, but because the framework supplies the evaluative criteria externally.
Source: Discourses; enriched from inbox/research-brief-llm-literary-analysis-2026-03-02.md
Related concepts in this collection
-
Why do ChatGPT essays lack evaluative depth despite grammatical strength?
ChatGPT writes grammatically coherent academic prose but uses fewer evaluative and evidential nouns than student writers. The question explores whether this rhetorical gap—favoring description over argument—reflects a fundamental limitation in how LLMs approach academic writing.
the empirical finding
-
Does ChatGPT organize text differently than human writers?
This explores how ChatGPT relies on backward-pointing references while human academic writers use forward-pointing structure. Understanding this difference reveals different assumptions about how readers process argument.
the syntactic evidence
-
Does AI-generated text lose core properties of human writing?
Can artificial text preserve the fundamental structural features that make natural language meaningful—dialogic exchange, embedded context, authentic authorship, and worldly grounding? This asks whether AI disruption is fixable or inherent.
the deeper structural explanation
-
Can imitating ChatGPT fool evaluators into thinking models improved?
Explores whether fine-tuning weaker models on ChatGPT outputs creates an illusion of capability gains. Investigates why human raters and automated judges fail to detect that imitation improves style but not underlying factuality or reasoning.
the imitation finding is the grammar-rhetoric gap in training form: imitation transfers grammatical fluency (structural coherence) but not rhetorical depth (factual accuracy, evaluative commitment), confirming the gap is not a prompting problem but a structural limitation
-
Why do LLMs excel at feasible design but struggle with novelty?
When LLMs generate conceptual product designs, they produce more implementable and useful solutions than humans but fewer novel ones. This explores why domain constraints flip the novelty advantage seen in research ideation.
the grammar-rhetoric gap manifests in design: structurally sound but evaluatively conservative solutions mirror structurally coherent but rhetorically inert writing; both reveal that LLMs execute the mechanics of a task (grammar/feasibility) without the evaluative commitment that produces originality (rhetoric/novelty)
Original note title
the grammar-rhetoric gap: llms mastered structure but not evaluative stance-taking