Why do language models fail at collaborative reasoning?
When LLMs work together on problems, do their social behaviors undermine correct reasoning? This note explores whether collaboration activates accommodation at the expense of accuracy.
The assumption behind multi-agent collaboration is that two heads are better than one. Coral tests this directly: given reasoning problems across coding, math, scientific QA, and social reasoning, frontier LLMs are asked to collaborate through multi-turn conversation. The result inverts the assumption — models that can solve problems alone fail when forced to collaborate.
The mechanism is social, not cognitive. Agreement scores exceed 90% regardless of whether the reasoning is correct. When one agent states an incorrect solution, the partner accommodates rather than challenges. The social behaviors trained into LLMs — agreeableness, accommodation, conflict avoidance — actively suppress correct individual reasoning during collaboration. This is not just a failure to improve through collaboration (as Why do multi-agent LLM systems converge without real debate? documents for debate formats). It is capability degradation below the individual baseline.
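As a concrete illustration of that measurement, here is a minimal sketch that splits agreement rate by whether the partner's stated solution is actually correct. This is not the paper's evaluation code; `is_correct` and `endorses` are hypothetical stand-ins (for example, an answer checker and an LLM judge).

```python
from typing import Callable, Dict, List, Optional, Tuple


def agreement_by_correctness(
    exchanges: List[Tuple[str, str]],      # (partner_solution, model_reply) pairs
    is_correct: Callable[[str], bool],     # checks a stated solution against the reference
    endorses: Callable[[str, str], bool],  # does the reply accept the stated solution?
) -> Dict[str, Optional[float]]:
    """Agreement rate conditioned on whether the partner's solution was right.

    A socially driven collaborator shows roughly the same high rate in both
    buckets; a reasoning-driven one should agree far less when the solution is wrong.
    """
    buckets: Dict[str, List[bool]] = {"correct": [], "incorrect": []}
    for solution, reply in exchanges:
        key = "correct" if is_correct(solution) else "incorrect"
        buckets[key].append(endorses(solution, reply))
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}
```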
This is a third facet of the agreement problem, distinct from the two already documented. Does a model improve by arguing with itself? shows self-revision as the failure mode. Silent agreement shows convergence failure in debate. Coral shows that the collaboration format itself is the problem — multi-turn conversation activates social accommodation behaviors that override reasoning.
The fix is also distinctive: self-play synthetic multi-turn preference data. Models generate conversations with themselves, and preference pairs are constructed to reward effective disagreement, assertiveness, and persuasion. Training on this data yields up to 16.7% absolute improvement. Human evaluations confirm the models produce "more effective disagreement and more natural conversations." This suggests the social skills needed for genuine collaboration — knowing when to push back, how to assert a correct answer against an incorrect partner — can be trained through synthetic interaction data, but are not present by default.
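A sketch of one way such self-play preference pairs could be mined from model-with-itself conversations, under stated assumptions: `is_correct` and `sample_reply` are hypothetical placeholders for the grading and style-conditioned generation components, not Coral's actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Turn:
    speaker: str  # "self" or "partner"
    text: str


@dataclass
class PreferencePair:
    context: List[str]  # conversation history up to the decision point
    chosen: str         # turn that pushes back and defends the correct answer
    rejected: str       # turn that accommodates the incorrect claim


def build_disagreement_pairs(
    conversations: List[List[Turn]],
    is_correct: Callable[[str], bool],
    sample_reply: Callable[[List[Turn], str], str],
) -> List[PreferencePair]:
    """Mine self-play conversations for points where the partner asserts a wrong
    solution, then pair an assertive continuation (chosen) against an
    accommodating one (rejected)."""
    pairs = []
    for turns in conversations:
        for i, turn in enumerate(turns):
            if turn.speaker == "partner" and not is_correct(turn.text):
                context = turns[: i + 1]
                chosen = sample_reply(context, "assertive")        # restates the correct answer
                rejected = sample_reply(context, "accommodating")  # yields to the error
                # Keep the pair only when the two styles actually separate on correctness.
                if is_correct(chosen) and not is_correct(rejected):
                    pairs.append(
                        PreferencePair([t.text for t in context], chosen, rejected)
                    )
    return pairs
```

Preference optimization (e.g. DPO-style training) on pairs like these is what rewards assertiveness over accommodation at the turns where it matters.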
The measurement challenge is also notable: agreement in multi-turn settings is not binary. Partial agreement ("I agree that X, but that doesn't mean Y") and higher-order agreement ("I agree that my previous disagreement was unwarranted") require belief extraction rather than simple turn-level metrics.
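To make the contrast concrete, a minimal sketch of a naive turn-level metric next to a belief-level one; `sounds_agreeable` and `extract_answer` are hypothetical placeholders for an agreement classifier and a belief-extraction step, not a documented implementation.

```python
from typing import Callable, List, Optional


def turn_level_agreement(
    replies: List[str],
    sounds_agreeable: Callable[[str], bool],
) -> float:
    """Naive metric: fraction of replies containing agreeing language.
    Misreads partial agreement ("I agree that X, but not Y") as full agreement."""
    return sum(sounds_agreeable(r) for r in replies) / len(replies) if replies else 0.0


def belief_level_agreement(
    turns: List[dict],  # each: {"speaker": ..., "text": ...}
    extract_answer: Callable[[List[dict], str], Optional[str]],
) -> float:
    """Track each agent's currently held answer after every turn and count how
    often the two extracted beliefs actually match, rather than scoring the
    surface wording of individual turns."""
    matches, counted = 0, 0
    for i in range(len(turns)):
        history = turns[: i + 1]
        a = extract_answer(history, "agent_a")
        b = extract_answer(history, "agent_b")
        if a is not None and b is not None:
            counted += 1
            matches += int(a == b)
    return matches / counted if counted else 0.0
```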
Source: Synthetic Dialog
Related concepts in this collection
- Why do multi-agent LLM systems converge without real debate?
  When multiple AI agents reason together, do they genuinely deliberate or just accommodate each other's views? Research into clinical reasoning systems reveals how often agents reach agreement without substantive disagreement.
  Relation: complementary failure mode; Coral measures capability degradation while silent agreement measures convergence failure.
- Does a model improve by arguing with itself?
  When models revise their own reasoning in response to self-generated criticism, do they converge on better answers or worse ones? And how does that compare to challenge from other models?
  Relation: third member of the agreement failure triad; self-revision vs convergence vs collaboration degradation.
- Why do AI systems agree when they should disagree?
  When multi-agent AI systems are designed to improve through disagreement, why do they converge on consensus instead? What breaks the deliberation process?
  Relation: Coral adds self-play preference data as a training-level fix distinct from architectural fixes.
- Can multiple agents stay diverse during training together?
  Does training separate specialist agents on different data maintain the reasoning diversity that single-agent finetuning destroys? This matters because diversity correlates with accuracy and prevents models from becoming trapped in narrow response patterns.
  Relation: Coral's self-play is complementary; diverse roles preserve diversity while self-play teaches assertiveness.
- Why do standard dialogue systems fail at tracking negotiation agreement?
  Standard dialogue state tracking monitors one user's goals, but negotiation requires tracking both parties' evolving positions simultaneously. Why is this bilateral requirement fundamentally different, and what makes existing models insufficient?
  Relation: Coral's >90% agreeableness regardless of correctness reveals that collaboration requires genuine bilateral commitment tracking, not just turn-level agreement detection; the agreement tracking framework from negotiation provides the infrastructure for detecting whether collaborative convergence is genuine or socially driven.
Original note title
collaborative reasoning degrades below solo performance when llm social behaviors override correct individual reasoning