Language Understanding and Pragmatics

Why does AI writing sound generic despite being grammatically correct?

Explores whether the robotic quality of AI text stems from grammatical failures or rhetorical ones. Understanding this distinction matters for diagnosing what AI systems actually struggle with in human-like writing.

Note · 2026-02-21 · sourced from Discourses
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

Post angle for Medium / LinkedIn

The popular complaint about AI writing is that it sounds generic or robotic. But research on metadiscursive nouns gives that complaint a precise mechanism: the gap isn't grammatical, it's rhetorical.

Grammar is the system of structural rules that makes sentences well-formed. Rhetoric is the art of making arguments persuasive — and in academic writing, that means taking evaluative stances. Claiming something is a strong finding (not just a finding). Positioning evidence as evidential (not just informational). Marking an argument as an argument (not just an explanation).

ChatGPT prefers manner nouns: method, approach, process. These are descriptively precise but rhetorically neutral. Human academic writers prefer status nouns (claim, argument, hypothesis) and evidential nouns (evidence, data, finding). These are the nouns that carry evaluative weight — they commit the author to a position about the epistemic status of what's being discussed.

The anaphoric preference compounds this: ChatGPT points backward, summarizing what it has already said. Human writers point forward, framing what they are about to argue. The cataphoric writer is making a bet: here is what I will show you. The anaphoric writer is reporting: here is what I have shown you. One invites the reader in; the other keeps them at a distance.

Together: AI text is organizationally coherent and argumentatively inert. It has the skeleton of academic argument without the flesh of evaluative commitment.

The deeper point: this isn't a prompt engineering problem. Evaluative stance-taking requires a writer who has a stake in the argument, who is committed to the claim. Autoregressive generation without genuine intentionality produces text that walks the structure of argument without inhabiting it.

False objectivity as fallback strategy. The matter-of-fact authoritative style of AI posts is best understood as a specific behavioral consequence of the rhetorical gap: when evaluative stance-taking is unavailable, objective claims become a fortification strategy. The text sticks to thematic content and names it neutrally because it cannot perform the evaluative work that would justify the authority the style implies. False objectivity is not chosen — it is the residue left when stance is absent. Authoritative tone plus thematic description approximates the look of expert commentary without requiring the commitment that expert commentary would need. The gap produces a specific output signature: assertion without argument, description pitched as judgment, and a studied neutrality that would be suspicious in human speech but passes unremarked in AI-generated text because readers have no speaker to assess.

The literary analysis implication: The grammar-rhetoric gap becomes most consequential in domains that require evaluative commitment. Literary criticism is the clearest case: a critic must take a position — this metaphor works because X, this poem fails because Y. The evaluative stance is the criticism. Without it, what remains is mechanical description. The Hermeneutics of Artificial Text analysis goes further: AI text "destroys the poetic function of the text" because the poetic function depends on the author's relationship to their material — a relationship that fabricated text cannot have. Research on argument quality suggests a partial workaround (see "Can models learn argument quality from labeled examples alone?"): providing explicit critical frameworks (New Criticism, reader-response theory, formalist analysis) as scaffolding might enable LLMs to produce structured literary criticism — not because they develop evaluative commitment, but because the framework supplies the evaluative criteria externally.


Source: Discourses; enriched from inbox/research-brief-llm-literary-analysis-2026-03-02.md

Original note title

the grammar-rhetoric gap: llms mastered structure but not evaluative stance-taking