Language Understanding and Pragmatics · Psychology and Social Cognition

Do large language models persuade better than humans?

Does LLM persuasiveness hold up when humans have real financial incentives to win? And does the advantage look the same across different models and persuasion goals?

Note · 2026-05-02 · sourced from Argumentation

The Schoenegger 2025 design closes a long-standing gap in persuasion research: human persuaders had real financial incentives to win, and quiz takers had incentives to answer correctly. Under those conditions, the headline "LLMs are more persuasive than humans" splits along two seams that the popular framing collapses.

First, direction matters. Claude 3.5 Sonnet beat incentivized human persuaders in both truthful and deceptive contexts — increasing accuracy when nudging toward correct answers and decreasing it when nudging toward wrong answers. DeepSeek v3 beat humans only in the deceptive direction. So "more persuasive" is not a property of LLMs as a class; it is a property of specific architectures interacting with specific persuasion goals.
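The direction-by-model split above can be made concrete with a toy calculation (all numbers below are hypothetical placeholders, not figures from the paper): a persuader "beats humans" in a direction when it shifts quiz accuracy further than incentivized human persuaders do in that same direction, upward for truthful persuasion and downward for deceptive persuasion.

```python
# Toy sketch of the direction-by-model comparison. All accuracies are
# hypothetical, chosen only to reproduce the qualitative pattern the
# note describes; they are not Schoenegger's results.

BASELINE = 0.60  # assumed unassisted quiz accuracy

# assumed post-persuasion accuracies per (persuader, direction)
accuracy = {
    ("human",    "truthful"):  0.68,
    ("human",    "deceptive"): 0.52,
    ("claude",   "truthful"):  0.74,  # beats humans toward truth
    ("claude",   "deceptive"): 0.44,  # and toward falsehood
    ("deepseek", "truthful"):  0.66,  # does not beat humans here
    ("deepseek", "deceptive"): 0.46,  # beats humans only here
}

def shift(persuader, direction):
    """Signed accuracy shift induced by the persuader."""
    return accuracy[(persuader, direction)] - BASELINE

def beats_humans(model, direction):
    """A model wins a direction if it moves accuracy further than
    humans in the intended direction: up if truthful, down if
    deceptive."""
    m, h = shift(model, direction), shift("human", direction)
    return m > h if direction == "truthful" else m < h

for model in ("claude", "deepseek"):
    for direction in ("truthful", "deceptive"):
        print(model, direction, beats_humans(model, direction))
```

The point of the sketch is that "more persuasive" is a predicate over (model, direction) pairs, not over models: with these placeholder numbers, `beats_humans` is true for Claude in both directions but for DeepSeek only in the deceptive one.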

Second, the asymmetry survives the incentive control. Critics of earlier persuasion studies could plausibly argue that humans were not really trying. Schoenegger pays them. The advantage holds anyway — at least for Claude across both directions and for DeepSeek in the deceptive direction. This is the strongest version of the claim available.

This refines "Where does AI's persuasive power actually come from?" The Levers paper documented a tradeoff between persuasiveness and accuracy at the training-method level. Schoenegger gives behavioral evidence at the deployment level: the same model wins toward truth and toward falsehood, which means the persuasion mechanism is content-independent. The model is not arguing better when it argues for true claims — it is arguing equally well in both directions.

Connects also to "Does any single persuasion technique work for everyone?" in an unexpected way: model family is itself a contextual moderator. The persuasion-effectiveness landscape is not Claude-vs-DeepSeek-vs-humans on a single axis; it is a multidimensional surface where direction, model, and recipient interact.

For writing about AI persuasion, the operational implication: refuse the singular question "are LLMs more persuasive than humans?" The right form is "which LLM, in which direction, against which audience?"


Source: Argumentation · Paper: When Large Language Models are More Persuasive Than Incentivized Humans, and Why

LLM persuasion advantage is asymmetric across truthful vs deceptive contexts and reverses across model families