Psychology and Social Cognition · Language Understanding and Pragmatics · Conversational AI Systems

Why does AI persuasion weaken over repeated interactions?

Claude and DeepSeek lose their persuasive edge as people encounter them repeatedly, unlike human persuaders. Understanding this decay could reveal where AI manipulation poses the greatest risk.

Note · 2026-05-02 · sourced from Argumentation

In Schoenegger's repeated-rounds design, the persuasive edge enjoyed by Claude 3.5 Sonnet and DeepSeek v3 over incentivized humans eroded over time, while human persuaders' effectiveness held steady. This inverts the usual pattern in human-to-human persuasion, where rapport often increases persuasive efficacy across exposures. With LLMs, the more turns a persuadee spends with the model, the less it sways them.

Two interpretations are compatible with the data, and they have different design consequences. One is mechanism-noticing: with more exposure, persuadees pick up on stylistic tells (the conviction-loading documented elsewhere in the same paper, the formulaic argument structures) and discount them. The other is content-thinness: the model has a finite repertoire of moves on a given question, and once a persuadee has seen them, additional iterations add no new persuasive material. The first explanation predicts decay even on novel topics; the second predicts decay primarily on repeated topics. The published results do not yet adjudicate between them.
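The diverging predictions can be made concrete with a toy simulation. Everything here is an illustrative assumption (the functional forms, the base rate, the discount and repertoire parameters); nothing is estimated from the paper. The point is only that the two accounts come apart on a novel topic shown to an already-exposed persuadee:

```python
def mechanism_noticing(exposures_to_model, base=0.6, discount=0.15):
    """Toy model: persuasion decays with total exposure to the model's
    style, regardless of whether the current topic is new."""
    return base * (1 - discount) ** exposures_to_model

def content_thinness(repeats_of_topic, base=0.6, repertoire=3):
    """Toy model: persuasion holds until the model's finite stock of
    moves on *this particular topic* is exhausted, then collapses."""
    return base if repeats_of_topic < repertoire else 0.0

# The diagnostic case: a persuadee with 5 prior exposures to the model,
# now facing a topic the model has never argued with them before.
novel_topic_mn = mechanism_noticing(exposures_to_model=5)
novel_topic_ct = content_thinness(repeats_of_topic=0)

assert novel_topic_mn < mechanism_noticing(0)   # mechanism-noticing: decayed
assert novel_topic_ct == content_thinness(0)    # content-thinness: unchanged
```

A design that rotates topics across rounds while tracking cumulative exposure would separate the two curves; the published design confounds them.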

Either way, the operational implication is sharp. AI persuasion is most dangerous in single-encounter contexts: one-shot political ads, cold marketing, first reads of a generated article, single-pass content moderation messages. Sustained interaction is partially self-correcting. This inverts a common assumption — that long conversations with AI are where manipulation lives — and locates the threat instead in low-engagement consumption.

This sharpens "Where does AI's persuasive power actually come from?": the post-training levers that boost persuasiveness operate against a baseline that itself decays under exposure. So the asymmetry between LLM and human persuasion is largest at first contact and narrows from there.

For media-design writing, this lines up with an emerging picture: AI's distinctive persuasive footprint is in skim-and-scroll information environments, not in deliberative dialogue. The same finding constrains expected effects in long-running coaching or therapy contexts — early-session sway is real, mid-program sway less so.


Source: Argumentation · Paper: "When Large Language Models are More Persuasive Than Incentivized Humans, and Why"


LLM persuasiveness wanes over repeated interactions while human persuasiveness does not — persuasion has a time-of-exposure decay specific to AI