Do LLMs and humans persuade through the same mechanisms?
If AI and human arguments convince readers equally well, do they work the same way under the surface? This matters for understanding whether AI persuasion is fundamentally equivalent to human persuasion or just superficially similar.
A 1,251-participant study of human and AI persuasion across 56 contentious claims found that LLM-generated and human-generated arguments shifted reader agreement at comparable rates. Same persuasive force. But the textual mechanisms producing that force diverged systematically. LLM arguments required higher cognitive effort to process — more grammatically complex, more lexically dense. They used moral language more heavily across positive and negative foundations. Sentiment was comparable; cognitive complexity and moral framing were not.
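To make the mechanism claims concrete, here is a toy sketch of two of the measures named above: lexical density (a common proxy for processing effort) and moral-language rate. This is not the study's pipeline; the `FUNCTION_WORDS` and `MORAL_LEXICON` sets below are small hand-picked stand-ins for the full stopword lists and wildcard-stemmed dictionaries (such as the Moral Foundations Dictionary) that published analyses typically use.

```python
import re

# Hypothetical mini-lexicons, for illustration only. Real analyses use a full
# function-word list and a validated moral lexicon with wildcard stems.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "for",
    "with", "that", "this", "it", "is", "are", "was", "were", "be", "as",
    "would", "no", "not",
}
MORAL_LEXICON = {
    "care", "harm", "fair", "unfair", "loyal", "loyalty", "betrayal",
    "authority", "subversion", "purity", "degradation",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def lexical_density(text: str) -> float:
    """Share of tokens that are content (non-function) words."""
    tokens = tokenize(text)
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens) if tokens else 0.0

def moral_rate(text: str) -> float:
    """Moral-lexicon hits per 100 tokens."""
    tokens = tokenize(text)
    hits = sum(1 for t in tokens if t in MORAL_LEXICON)
    return 100 * hits / len(tokens) if tokens else 0.0

# Invented example arguments, chosen to mirror the reported pattern:
# the LLM-style text scores higher on both measures.
human_arg = "It is fair to let people choose, and the policy would harm no one."
llm_arg = ("A society that values purity and authority must still weigh care "
           "against harm and loyalty against betrayal.")

for label, arg in [("human", human_arg), ("llm", llm_arg)]:
    print(f"{label}: density={lexical_density(arg):.2f}, moral/100w={moral_rate(arg):.1f}")
```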
The authors call this "no equivalence in process despite equivalence in outcome." It is a consequential framing because it severs the standard inference from persuasive success to underlying mechanism. When two arguments persuade equally, we typically infer that they did so for similar reasons. The data here say the opposite: equivalent persuasive force can rest on entirely different rhetorical scaffolding.
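The "equivalence in outcome" half of that phrase is itself a statistical claim, and it cannot be established by merely failing to find a significant difference; it calls for an equivalence test such as TOST (two one-sided tests). A minimal sketch, assuming simulated data rather than the study's own numbers and an illustrative equivalence bound of ±0.2, of how equivalent outcomes and divergent mechanisms can coexist in one dataset:

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, high):
    """Two one-sided t-tests: is the mean difference inside (low, high)?

    Returns the TOST p-value (max of the two one-sided p-values);
    a small value supports equivalence within the declared bounds.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    # Pooled (equal-variance) standard error of the mean difference.
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
# Outcome: simulated agreement shifts with nearly identical means.
human_shift = rng.normal(0.50, 1.00, 600)
llm_shift = rng.normal(0.52, 1.00, 600)
# Mechanism: simulated lexical density with a real gap.
human_density = rng.normal(0.55, 0.05, 600)
llm_density = rng.normal(0.60, 0.05, 600)

print("outcome TOST p:", tost_ind(human_shift, llm_shift, -0.2, 0.2))
print("mechanism t-test p:", stats.ttest_ind(human_density, llm_density).pvalue)
```

The `max()` is the point of the design: equivalence is concluded only when both one-sided nulls are rejected, which is what licenses a "same persuasive force" reading even as an ordinary t-test on the mechanism measure rejects equality.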
For a Language as Event reading, this is precisely the place where the AI's production process and the human's interpretation come apart. The reader experiences the argument as a unified utterance from a unified speaker — a stance, a tone, a voice. But what the model produced and what the human produced are different kinds of textual artifact, achieving the same effect through non-overlapping mechanisms. The human produced an event-residue from within a communicative situation; the LLM produced an event-residue that simulates one. Their persuasive force is equivalent because the audience cannot distinguish them on textual surface. The mechanisms are different because only one of them was actually communicating.
Source: Argumentation
Related concepts in this collection
- Why are complex LLM arguments as persuasive as simple ones? Standard persuasion research predicts that simpler, easier-to-read arguments persuade better. But LLM-generated text breaks this rule: it's measurably more complex yet equally convincing. What explains this reversal? (Establishes the cognitive-effort dimension.)
- Do LLMs use moral language more than humans? This explores whether large language models rely more heavily on appeals to care, fairness, authority, and sanctity than human arguers do, and whether this difference persists when emotional tone remains equivalent. (Establishes the moral-language dimension.)
Original note title: LLMs achieve persuasive equivalence with humans through divergent strategies — equivalence in outcome without equivalence in process