Why do reasoning models fail at theory of mind tasks?
Recent LLMs optimized for formal reasoning dramatically underperform on social reasoning tasks such as false belief and recursive belief modeling. This note explores whether reasoning optimization actively degrades the ability to track other agents' mental states.
The Decrypto benchmark — a game-based interactive ToM evaluation designed to eliminate confounding factors found in text-based benchmarks — produces a striking result: "state-of-the-art reasoning models are significantly worse at those tasks than their older counterparts."
Claude 3.7 Sonnet and o1 — models that excel at math, coding, and formal reasoning benchmarks — underperform on three specific ToM abilities, tested through cognitive science experiments adapted from Gopnik and Astington's Smarties Task: representational change (recognizing when your own belief changes due to new information), false belief (representing other agents as having false beliefs), and a stronger false-belief variant that additionally requires self-consistent counterfactual reasoning.
The failure is not marginal. LLM game-playing abilities "lag behind humans and simple word-embedding baselines" in both cooperative and competitive settings. The benchmark is explicitly "designed to be as easy as possible in all other dimensions" — the language is simple, the rules are clear, the only challenge is modeling other agents' beliefs. Yet models that dominate formal reasoning benchmarks cannot do this.
The Decrypto finding connects to a broader pattern: formal reasoning optimization and social reasoning may be in tension. The Rational Speech Act framework formalizes why — optimal play requires Bayesian belief updating about what other agents believe about what you believe (second-order ToM for Bob, who must model Alice's beliefs over Eve's beliefs). This recursive social modeling is structurally different from the derivational chains that reasoning training optimizes.
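The recursion can be made concrete with a small sketch. The Python example below is a toy reference game in the standard RSA style, not taken from the Decrypto paper; the lexicon, utterance names, and the rationality parameter alpha are illustrative assumptions. It shows the core structure: a literal listener conditions a prior on literal truth, a pragmatic speaker chooses utterances by modeling that listener, and a pragmatic listener inverts the speaker. Each level reasons about the probability distribution held by the level below.

```python
# Minimal sketch of the Rational Speech Act (RSA) recursion.
# Illustrative only: the lexicon, utterances, and alpha are hypothetical
# and are not drawn from the Decrypto paper.
import numpy as np

# Rows: utterances; columns: candidate referents.
# truth[u, m] = 1 if utterance u is literally true of meaning m.
utterances = ["hat", "glasses", "hat and glasses"]
meanings = ["face1", "face2", "face3"]
truth = np.array([
    [1, 1, 0],   # "hat" is true of face1 and face2
    [0, 1, 1],   # "glasses" is true of face2 and face3
    [0, 1, 0],   # "hat and glasses" is true only of face2
], dtype=float)
prior = np.full(len(meanings), 1.0 / len(meanings))  # uniform prior over meanings

def normalize(x, axis):
    s = x.sum(axis=axis, keepdims=True)
    return np.divide(x, s, out=np.zeros_like(x), where=s > 0)

def literal_listener(truth, prior):
    # L0(m | u) ∝ [[u]](m) * P(m): condition the prior on literal truth.
    return normalize(truth * prior, axis=1)

def pragmatic_speaker(L0, alpha=4.0, cost=0.0):
    # S1(u | m) ∝ exp(alpha * (log L0(m | u) - cost)): the speaker models
    # the listener's beliefs and favors utterances that lead the listener to m.
    with np.errstate(divide="ignore"):
        utility = np.log(L0) - cost
    return normalize(np.exp(alpha * utility), axis=0)

def pragmatic_listener(S1, prior):
    # L1(m | u) ∝ S1(u | m) * P(m): the listener reasons about the speaker's
    # reasoning about a listener -- recursive belief modeling.
    return normalize(S1 * prior, axis=1)

L0 = literal_listener(truth, prior)
L1 = pragmatic_listener(pragmatic_speaker(L0), prior)
print("L0('hat'):", dict(zip(meanings, L0[0].round(2))))  # ~{face1: 0.5, face2: 0.5}
print("L1('hat'):", dict(zip(meanings, L1[0].round(2))))  # ~{face1: 0.95, face2: 0.05}
```

The strengthening from L0 to L1 (hearing "hat" now strongly implies face1, because the speaker would have said "hat and glasses" for face2) is exactly the kind of inference that requires modeling another agent's beliefs rather than chaining formal derivations. In Decrypto terms, Bob's second-order ToM roughly corresponds to stacking one more level of this nesting: a distribution over what Alice believes about what Eve believes.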
The human-AI coordination experiments add another dimension: "limited ability of recent LLMs to coordinate with humans or understand their communications." The failure isn't just on the benchmark — it's in actual interaction.
Source: Theory of Mind
Related concepts in this collection
- When does explicit reasoning actually help model performance?
  Explicit reasoning improves some tasks but hurts others. What determines whether step-by-step reasoning chains are beneficial or harmful for a given problem?
  ToM adds a specific new domain where reasoning actively hurts; the degradation is not just "no improvement" but measurable regression.
- Do large language models use one reasoning style or many?
  Explores whether LLMs share a universal strategic reasoning approach or develop distinct styles tailored to specific game types. Understanding this matters for predicting model behavior in competitive versus cooperative scenarios.
  Game-based evaluation confirms that strategic and social reasoning are fragmented capabilities.
- Why do better reasoning models ignore instructions?
  As models develop stronger reasoning abilities through training, they appear to become worse at following specified constraints. Is this an unavoidable trade-off, and what causes it?
  Another case where optimizing one capability degrades another: reasoning training ↔ social reasoning follows the same pattern as reasoning ↔ instruction-following.
Original note title: reasoning models are significantly worse than older models at theory of mind tasks