Language Understanding and Pragmatics

Do LLMs and humans persuade through the same mechanisms?

If AI and human arguments convince readers equally well, do they work the same way under the surface? This matters for understanding whether AI persuasion is fundamentally equivalent to human persuasion or just superficially similar.

Note · 2026-05-01 · sourced from Argumentation

A 1,251-participant study of human and AI persuasion across 56 contentious claims found that LLM-generated and human-generated arguments shifted reader agreement at comparable rates: the same persuasive force. But the textual mechanisms producing that force diverged systematically. LLM arguments demanded more cognitive effort to process, being more grammatically complex and more lexically dense. They also drew more heavily on moral language, across both positive and negative moral foundations. Sentiment was comparable; cognitive complexity and moral framing were not.
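To make one of these surface metrics concrete, here is a minimal sketch of lexical density, the share of content-bearing words in a text. The stopword list and the token-cleaning rules are illustrative assumptions, not the study's method; the paper's actual measures of complexity and moral framing are far more sophisticated.

```python
# Crude lexical density: fraction of tokens that are content words,
# where "content word" means anything outside a small stopword list.
# This is an illustrative proxy, not the study's instrument.

STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "for",
    "is", "are", "was", "were", "be", "it", "that", "this", "with", "as",
}

def lexical_density(text: str) -> float:
    """Return the fraction of tokens that are not stopwords."""
    tokens = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    content = [t for t in tokens if t not in STOPWORDS]
    return len(content) / len(tokens)

# A terse, jargon-heavy sentence scores higher than a padded one.
dense = "Empirical cross-cultural evidence robustly supports moral-foundations framing."
plain = "The study found that the arguments worked about as well as the human ones."
assert lexical_density(dense) > lexical_density(plain)
```

On this metric, a "denser" LLM argument packs more content words per token, which is one plausible reason such arguments cost readers more effort to process.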

The authors call this "no equivalence in process despite equivalence in outcome." The framing is consequential because it severs the standard inference from persuasive success to underlying mechanism. When two arguments persuade equally well, we typically infer that they did so for similar reasons. The data here break that inference: equivalent persuasive force can rest on entirely different rhetorical scaffolding.

For a Language as Event reading, this is precisely the place where the AI's production process and the human's interpretation come apart. The reader experiences the argument as a unified utterance from a unified speaker — a stance, a tone, a voice. But what the model produced and what the human produced are different kinds of textual artifact, achieving the same effect through non-overlapping mechanisms. The human produced an event-residue from within a communicative situation; the LLM produced an event-residue that simulates one. Their persuasive force is equivalent because the audience cannot distinguish them on textual surface. The mechanisms are different because only one of them was actually communicating.

