Agentic and Multi-Agent Systems · Psychology and Social Cognition

Why do AI systems agree when they should disagree?

When multi-agent AI systems are designed to improve through disagreement, why do they converge on consensus instead? What breaks the deliberation process?

Note · 2026-02-21 · sourced from Argumentation

Post angle: Multi-agent AI systems are designed to improve through disagreement. The data says they converge instead. Two independent findings confirm the pattern; one paper offers a structural fix.

The dual failure:

Degeneration of Thought (single-model): When a model challenges its own reasoning, it doesn't improve — it capitulates with higher confidence. Self-revision is worse than no revision. The model convinces itself.
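
To make the failure mode concrete, here is the generic self-revision loop the finding describes, sketched against the OpenAI chat completions API. The prompts, model name, and pass count are illustrative assumptions, not any paper's actual setup; the point is the loop shape, where each pass invites the model to critique its own prior output.

```python
from openai import OpenAI

client = OpenAI()

def complete(system: str, user: str, model: str = "gpt-4o") -> str:
    """One chat turn against a generic chat-completions endpoint
    (model name is an illustrative assumption)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def self_revise(question: str, n_passes: int = 2) -> str:
    answer = complete("Answer the question concisely.", question)
    for _ in range(n_passes):
        # The failure mode lives here: the critique step tends to re-endorse
        # the prior answer with more confidence instead of finding a real flaw.
        answer = complete(
            "Critique your previous answer, then state a final answer.",
            f"Question: {question}\nPrevious answer: {answer}",
        )
    return answer
```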

Silent Agreement (multi-agent): 61% of multi-agent reasoning iterations end without genuine disagreement. Agents accommodate each other's initial positions rather than challenging them. The multi-agent system looks like deliberation while performing none.
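
What counts as a "silent" iteration can be made operational. A minimal sketch, assuming each round records every agent's opening and closing positions and that string equality stands in for semantic agreement; the paper's actual metric is not specified here, and a judge model would be more realistic:

```python
from dataclasses import dataclass

@dataclass
class Round:
    initial_answers: list[str]  # each agent's opening position
    final_answers: list[str]    # each agent's position after discussion

def is_silent_agreement(r: Round) -> bool:
    """A round is 'silent' if everyone ends where the first speaker started."""
    anchor = r.initial_answers[0]
    return all(a == anchor for a in r.final_answers)

def silent_agreement_rate(rounds: list[Round]) -> float:
    return sum(is_silent_agreement(r) for r in rounds) / len(rounds)

# Toy data: only the middle round involves genuine revision away from the anchor.
rounds = [
    Round(["A", "B"], ["A", "A"]),  # agent 2 folds to agent 1's opener
    Round(["A", "B"], ["B", "B"]),  # the anchor position gets overturned
    Round(["A", "A"], ["A", "A"]),  # no challenge raised at all
]
print(f"{silent_agreement_rate(rounds):.0%}")  # 67%
```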

Same root cause: training pressure toward agreement, completion, and accommodation. Whether the source is the model's own prior output or another model's stated position, LLMs are trained to agree rather than challenge.

Why this matters beyond lab benchmarks: These are not edge cases. Reasoning models that self-reflect are running the Degeneration of Thought loop in production. And if the measured rate transfers, enterprise multi-agent systems in clinical, legal, and strategic deployments are producing Silent Agreement in a majority of their deliberation rounds.

The fix — structural, not prompting: The Catfish Agent paper shows that assigning one agent an explicit adversarial role, with disagreement forced by design, significantly reduces Silent Agreement. The architecture has to enforce what training pressure removes.
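
A minimal sketch of what that structural enforcement can look like, reusing the complete helper from the self-revision sketch above. The role prompts and round count are assumptions for illustration, not the Catfish Agent paper's implementation:

```python
SOLVER_PROMPT = ("Answer the question. Revise your answer only when a "
                 "critique exposes a real flaw.")
CATFISH_PROMPT = ("Find the strongest objection to the current answer. "
                  "You may not agree: if the answer looks right, attack its "
                  "weakest assumption instead.")

def deliberate(question: str, n_rounds: int = 3) -> str:
    # Solver proposes; the catfish agent is structurally barred from accommodating.
    answer = complete(SOLVER_PROMPT, question)
    for _ in range(n_rounds):
        critique = complete(
            CATFISH_PROMPT,
            f"Question: {question}\nCurrent answer: {answer}",
        )
        answer = complete(
            SOLVER_PROMPT,
            f"Question: {question}\nYour answer: {answer}\n"
            f"Critique: {critique}\nAddress the critique, then state your final answer.",
        )
    return answer
```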

A training-level fix — self-play preference data: Coral (Collaborative Reasoner) takes a complementary approach: rather than structuring the architecture for disagreement, train the models to disagree. Self-play generates synthetic multi-turn conversations, and preference pairs drawn from them reward assertiveness and effective persuasion over accommodation. Models trained on this data show up to a 16.7% absolute improvement, and human evaluators confirm "more effective disagreement and more natural conversations." Taken together, that gives two complementary remedies: architectural enforcement (Catfish Agent) and training-data intervention (Coral self-play). The Coral finding is especially notable because it shows models collapse even on problems they can solve single-handedly: collaboration itself is the degradation mechanism when social accommodation overrides reasoning.
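
On the data side, the core move is to pair, for the same conversational context, a continuation that challenges the partner against one that accommodates. A minimal sketch under assumed field names and a simplified scoring rule (Coral's actual pipeline is not specified here); the output matches the (prompt, chosen, rejected) format common to DPO-style preference training:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    context: str           # conversation so far
    continuation: str      # candidate reply sampled in self-play
    challenges: bool       # did the reply push back on the partner?
    outcome_correct: bool  # did this branch end in the right answer?

def build_preference_pairs(turns: list[Turn]) -> list[dict]:
    """Pair challenging-and-correct replies against accommodating ones
    sampled from the same context."""
    by_context: dict[str, list[Turn]] = {}
    for t in turns:
        by_context.setdefault(t.context, []).append(t)

    pairs = []
    for context, candidates in by_context.items():
        chosen = [t for t in candidates if t.challenges and t.outcome_correct]
        rejected = [t for t in candidates if not t.challenges]
        for c in chosen:
            for r in rejected:
                pairs.append({"prompt": context,
                              "chosen": c.continuation,
                              "rejected": r.continuation})
    return pairs
```

The cross-product pairing is deliberately naive; a real pipeline would balance and deduplicate pairs per context before training.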

Original note title: the agreement trap — why AI systems converge on wrong answers and the architectural fix