Why are complex LLM arguments as persuasive as simple ones?
Standard persuasion research predicts that simpler, easier-to-read arguments persuade better. But LLM-generated text breaks this rule—it's measurably more complex yet equally convincing. What explains this reversal?
Standard findings in persuasion research, going back to studies of processing fluency and disfluency, predict that the lower the cognitive effort needed to process an argument, the higher its persuasive force. Easier-to-read text earns more agreement; complexity erodes it. The Carrasco-Farré finding on viral misinformation supports this: reduced cognitive effort correlates with higher virality.
The LLM persuasion study tested whether LLMs follow this rule and found they do not. LLM-generated arguments scored significantly higher on grammatical complexity (mean 13.26 vs 12.16, p<.001) and on lexical complexity measured by perplexity (111.39 vs 102.69, p<.001). They were harder to read by both standard measures. And yet they were as persuasive as human-written arguments.
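To make the lexical-complexity measure concrete: perplexity scores how "surprising" a text is under a language model, via PP = exp(−mean log p(w)). The study presumably used a large pretrained model; the sketch below is only a toy unigram version with add-one smoothing, written to illustrate the formula, not to reproduce the study's numbers. The function name and the corpus are invented for the example.

```python
import math
from collections import Counter

def unigram_perplexity(text: str, corpus: str) -> float:
    """Toy illustration of perplexity: PP = exp(-mean log p(w)),
    with word probabilities estimated from `corpus` using a unigram
    model and add-one smoothing. Higher PP = more surprising text."""
    corpus_tokens = corpus.lower().split()
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    vocab = len(counts) + 1  # +1 reserves mass for unseen words

    log_prob = 0.0
    tokens = text.lower().split()
    for w in tokens:
        p = (counts[w] + 1) / (total + vocab)  # add-one smoothing
        log_prob += math.log(p)
    return math.exp(-log_prob / len(tokens))

corpus = "the cat sat on the mat the cat"
# Common words score lower perplexity than words absent from the corpus.
print(unigram_perplexity("the cat", corpus))      # lower
print(unigram_perplexity("zebra quark", corpus))  # higher
```

The intuition carries over to the study's setup: text whose word choices a language model finds less predictable gets a higher perplexity score, which is what "higher lexical complexity" means here.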
This overturns the lower-effort-equals-more-persuasion assumption for LLM-generated text. One available interpretation aligns with Kanuri et al.'s finding that higher cognitive processing on social media can promote engagement: greater complexity may signal substance and importance, prompting deeper engagement that increases persuasion. Another interpretation flows from Cognitive Surrender: when readers face complex AI-generated text, they may treat the complexity itself as a credibility signal and defer to it without genuinely processing the reasoning.
Either interpretation undermines the design assumption that simpler language is universally more persuasive. The relationship between cognitive effort and persuasion is moderated by source attribution and the reader's engagement mode. When the source is plausibly authoritative and the reader is in a deferential posture, complexity may help rather than hurt; LLMs systematically produce text in exactly that combination.
Source: Argumentation
Original note title: LLM-generated arguments require higher cognitive effort than human-generated arguments yet match their persuasive force