LLM Reasoning and Architecture · Psychology and Social Cognition · Language Understanding and Pragmatics

Why do reasoning models fail at theory of mind tasks?

Recent LLMs optimized for formal reasoning dramatically underperform on social reasoning tasks such as false-belief attribution and recursive belief modeling. This note explores whether reasoning optimization actively degrades the ability to track other agents' mental states.

Note · 2026-02-22 · sourced from Theory of Mind
How should researchers navigate LLM reasoning research? Where exactly do reasoning models break down? Why do LLMs excel at social norms yet fail at theory of mind?

The Decrypto benchmark — a game-based interactive ToM evaluation designed to eliminate confounding factors found in text-based benchmarks — produces a striking result: "state-of-the-art reasoning models are significantly worse at those tasks than their older counterparts."

Claude 3.7 Sonnet and o1, models that excel at math, coding, and formal reasoning benchmarks, underperform on three specific ToM abilities tested through cognitive science experiments adapted from Gopnik and Astington's Smarties Task: representational change (recognizing that your own belief has changed in light of new information), false belief (representing another agent as holding a false belief), and a strong variant of false belief that additionally requires self-consistent counterfactual reasoning.
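
To make the three probe types concrete, here is a minimal sketch in the classic Smarties setup, where a candy tube turns out to contain pencils; the probe wording is illustrative, not quoted from the Decrypto adaptation.

```python
# Classic Smarties setup (Gopnik & Astington): a candy tube is shown,
# then revealed to contain pencils. Probe wording is illustrative only.
SMARTIES_PROBES = {
    # Representational change: track the change in your OWN belief.
    "representational_change":
        "Before you looked inside the tube, what did you think was in it?",
    # False belief: represent ANOTHER agent's mistaken belief.
    "false_belief":
        "Your friend has not looked inside. What will they think is in it?",
    # Strong variant: the two answers above must stay mutually consistent
    # under counterfactual follow-ups about who saw what, and when.
    "self_consistent_counterfactual":
        "If your friend had peeked before the reveal, what would they now "
        "think is in the tube, and what did you yourself think at first?",
}
```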

The failure is not marginal. LLM game-playing abilities "lag behind humans and simple word-embedding baselines" in both cooperative and competitive settings. The benchmark is explicitly "designed to be as easy as possible in all other dimensions" — the language is simple, the rules are clear, the only challenge is modeling other agents' beliefs. Yet models that dominate formal reasoning benchmarks cannot do this.
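
For scale, a word-embedding baseline can be purely lexical: match each clue to the keyword slot with the nearest vector, with no model of the cluer's beliefs at all. The sketch below is a minimal illustration under that assumption (toy vectors and a hypothetical `guess_code` helper, not the exact baseline from the Decrypto paper).

```python
import numpy as np

# Toy embedding table (illustrative vectors, not real pretrained weights).
TOY_VECTORS = {
    "ocean":  np.array([0.9, 0.1, 0.0, 0.0]),
    "wave":   np.array([0.8, 0.2, 0.1, 0.0]),
    "chess":  np.array([0.0, 0.9, 0.1, 0.0]),
    "knight": np.array([0.1, 0.8, 0.0, 0.1]),
    "bread":  np.array([0.0, 0.1, 0.9, 0.0]),
    "toast":  np.array([0.1, 0.0, 0.8, 0.1]),
    "moon":   np.array([0.0, 0.0, 0.1, 0.9]),
    "crater": np.array([0.1, 0.1, 0.0, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def guess_code(clues, keywords, vectors=TOY_VECTORS):
    """Map each clue to the keyword slot with the most similar vector.
    Purely lexical matching: no modeling of the cluer's beliefs."""
    code = []
    for clue in clues:
        sims = [cosine(vectors[clue], vectors[kw]) for kw in keywords]
        code.append(int(np.argmax(sims)) + 1)  # slots are 1-indexed
    return code

# Four keywords in slots 1-4; three clues hinting at slots (2, 3, 4).
print(guess_code(["knight", "toast", "crater"],
                 ["ocean", "chess", "bread", "moon"]))  # -> [2, 3, 4]
```

That such a belief-free matcher can beat recent LLMs is what makes the benchmark result striking.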

The Decrypto finding connects to a broader pattern: formal reasoning optimization and social reasoning may be in tension. The Rational Speech Act framework formalizes why — optimal play requires Bayesian belief updating about what other agents believe about what you believe (second-order ToM for Bob, who must model Alice's beliefs over Eve's beliefs). This recursive social modeling is structurally different from the derivational chains that reasoning training optimizes.
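
A minimal sketch of that RSA recursion, using the standard scalar-implicature example from the RSA literature; the toy lexicon, `alpha`, and function names are illustrative assumptions, not drawn from the Decrypto paper.

```python
import numpy as np

# Toy lexicon: rows = utterances, cols = world states.
# lexicon[u, w] = 1.0 if utterance u is literally true in world w.
utterances = ["some", "all"]
worlds = ["partial", "total"]
lexicon = np.array([
    [1.0, 1.0],  # "some" is true in both worlds
    [0.0, 1.0],  # "all" is true only in the total world
])

prior = np.array([0.5, 0.5])  # uniform prior over worlds
alpha = 1.0                   # speaker rationality

def normalize(p, axis):
    return p / p.sum(axis=axis, keepdims=True)

def literal_listener(lexicon, prior):
    # L0(w | u) ∝ lexicon(u, w) * P(w)
    return normalize(lexicon * prior, axis=1)

def pragmatic_speaker(L, alpha):
    # S1(u | w) ∝ exp(alpha * log L(w | u))
    with np.errstate(divide="ignore"):
        utility = alpha * np.log(L)
    return normalize(np.exp(utility), axis=0)

def pragmatic_listener(S, prior):
    # L1(w | u) ∝ S1(u | w) * P(w): Bayesian updating about the
    # speaker's beliefs about the listener's interpretation.
    return normalize(S * prior, axis=1)

L0 = literal_listener(lexicon, prior)
S1 = pragmatic_speaker(L0, alpha)
L1 = pragmatic_listener(S1, prior)

posterior = L1[utterances.index("some")]
print({w: round(float(p), 2) for w, p in zip(worlds, posterior)})
# -> {'partial': 0.75, 'total': 0.25}: hearing "some", L1 infers "not all"
```

Each additional listener/speaker layer adds one order of recursion; the second-order ToM the note describes for Bob is this same nesting applied to agents' beliefs rather than to word meanings.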

The human-AI coordination experiments add another dimension: "limited ability of recent LLMs to coordinate with humans or understand their communications." The failure isn't just on the benchmark — it's in actual interaction.


Source: Theory of Mind

Original note title: reasoning models are significantly worse than older models at theory of mind tasks