Why do reasoning models fail at exception-based rule inference?
Explores why chain-of-thought models systematically underperform on tasks requiring inductive rule inference from exceptions in game-based settings, despite excelling at normal rule patterns.
The standard case for reasoning models holds for deductive and mathematical tasks where explicit step-by-step inference adds genuine value. Inductive reasoning from sparse observations is different — and reasoning models are systematically worse at it.
Across four controlled game-based tasks (chess, Texas Hold'em, dice games, blackjack) with hidden rules, models are given gameplay transcripts without access to the rules and must infer the latent constraints. The pattern:
- On normal rules (surface-aligned, structurally obvious): most models exceed 90% accuracy — strong pattern recognition from training
- On special/exception rules (exception-based, structurally hidden): non-reasoning models reach 55–65% accuracy; their reasoning counterparts fall below 25%
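A minimal sketch of how such a hidden-rule evaluation could be scored, assuming a multiple-choice formulation and an illustrative `model.choose` interface (the benchmark's actual transcript format and harness are not specified here):

```python
from dataclasses import dataclass

@dataclass
class HiddenRuleTask:
    """One evaluation item: gameplay transcripts plus the latent rule to infer."""
    game: str                    # e.g. "chess", "texas_holdem", "dice", "blackjack"
    transcripts: list[str]       # observed gameplay; the hidden rule is never stated
    candidate_rules: list[str]   # candidate rule statements (illustrative format)
    answer_idx: int              # index of the true hidden rule
    rule_type: str               # "normal" or "exception"

def evaluate(model, tasks: list[HiddenRuleTask]) -> dict[str, float]:
    """Report rule-inference accuracy separately for normal vs. exception rules."""
    correct = {"normal": 0, "exception": 0}
    total = {"normal": 0, "exception": 0}
    for task in tasks:
        prompt = (
            f"Here are {task.game} game records:\n"
            + "\n".join(task.transcripts)
            + "\nWhich hidden rule best explains these games?\n"
            + "\n".join(f"{i}: {r}" for i, r in enumerate(task.candidate_rules))
        )
        prediction = model.choose(prompt)   # assumed interface: returns an option index
        total[task.rule_type] += 1
        correct[task.rule_type] += int(prediction == task.answer_idx)
    return {k: correct[k] / total[k] for k in total if total[k]}
```

The property that matters is the split: accuracy is reported separately for normal and exception rules, which is what exposes the gap above.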
The gap is not random variation. Reasoning models struggle with exception-based rules because CoT introduces systematic failure modes; the dominant category, Solving Error (over 80% of failures), breaks down into three subtypes:
- Math overuse: applying arithmetic to symbolic inputs (card suits, chess pieces) where no arithmetic applies
- Overgeneralization: inferring rules from too few examples without validation
- Hallucinated rules: introducing fabricated constraints not present in observations
The theoretical framework formalizes why: three failure modes propagate through belief updates. (1) Incorrect sub-task decomposition (breakdown error): the model fixates on irrelevant features. (2) Incorrect sub-task solving (solving error): over 80% of failures, covering the Math Overuse, Overgeneralization, and Hallucinated Rules subtypes above. (3) Incorrect final-answer summarization (summary error): chains that run longer or shorter than the optimal depth. The formalization shows that each additional reasoning step is a potential error amplification point when the task requires recognizing exceptions to a pattern rather than extending the pattern.
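One schematic way to read the amplification claim (an illustration consistent with the description above, not the framework's actual equations): if step i of a T-step chain avoids a breakdown, solving, or summary error with probability 1 - ε_i, then

```latex
% Schematic compounding of per-step errors over a T-step chain (illustrative only).
% \varepsilon_i = probability that step i introduces a breakdown, solving, or summary error.
P(\text{chain correct}) \;=\; \prod_{i=1}^{T} \bigl(1 - \varepsilon_i\bigr)
\;\le\; \bigl(1 - \min_i \varepsilon_i\bigr)^{T}
```

Accuracy decays with chain length whenever the per-step error rates stay bounded away from zero, which is the regime exception-heavy tasks create for the solving-error term; a direct answer, by contrast, exposes only a single error point.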
Why CoT makes this worse: inductive rule inference from exceptions requires recognizing that existing rules don't apply, a form of negative knowledge. CoT pressures models to produce positive reasoning chains. When the correct inference is "I see an exception to the pattern I've been building," CoT instead generates elaborate chains that explain the exception away so the existing pattern survives.
This extends "When does explicit reasoning actually help model performance?" by adding a third category: tasks requiring inductive inference from negative evidence. The taxonomy now has three zones:
- Logical derivation structure → CoT helps
- Continuous nuanced judgment → CoT hurts
- Inductive exception inference → CoT hurts (by a different mechanism)
An important nuance from the SolverLearner framework: when inductive reasoning is properly scaffolded and separated from deduction, LLMs achieve near-perfect performance (ACC ≈ 1). The SolverLearner enables models to learn underlying functions (y = f(x)) using only in-context examples, isolating induction from the confounds of mixed-mode reasoning. This means the inductive capacity exists but is fragile — it succeeds when the framework prevents the model from falling into deductive patterns and when exceptions are not present. The failure documented above is specifically in unscaffolded exception recognition, not in inductive reasoning per se. The implication: architectural separation of inductive and deductive reasoning modes could recover the latent inductive capacity that CoT suppresses.
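A minimal sketch of that separation, assuming a hypothetical `llm_complete` text-completion helper and an illustrative prompt (not SolverLearner's actual implementation): the model is asked only to induce f from in-context examples, and applying f to new inputs is handled by an external interpreter.

```python
def induce_function(llm_complete, examples: list[tuple[str, str]]) -> str:
    """Induction phase: the model sees only (x, y) pairs and must propose f as code.

    `llm_complete` is an assumed text-completion helper; the prompt wording is
    illustrative, not SolverLearner's exact prompt.
    """
    shots = "\n".join(f"f({x!r}) == {y!r}" for x, y in examples)
    prompt = (
        "Write a Python function f(x) consistent with all of these observations.\n"
        f"{shots}\n"
        "Return only the code for f."
    )
    return llm_complete(prompt)

def deduce(function_source: str, x):
    """Deduction phase: apply the induced function with an external interpreter,
    so rule application never mixes back into the model's own reasoning."""
    namespace: dict = {}
    exec(function_source, namespace)   # sandboxed/trusted execution assumed
    return namespace["f"](x)
```

The design choice that matters is the hand-off: because the induced rule is executed outside the model, deduction cannot leak back into the induction step, and that isolation is what the near-perfect accuracy depends on.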
Source: Reasoning Critiques
Related concepts in this collection
- When does explicit reasoning actually help model performance? Explicit reasoning improves some tasks but hurts others. What determines whether step-by-step reasoning chains are beneficial or harmful for a given problem? (Relation: this note adds inductive exception inference as a third zone where CoT hurts.)
- Does chain-of-thought reasoning reveal genuine inference or pattern matching? Explores whether CoT instructions unlock real reasoning capabilities or simply constrain models to mimic familiar reasoning patterns from training data. This matters for understanding whether language models can actually reason abstractly. (Relation: the imitation theory predicts this failure, since schemata for exception inference are rare in training data.)
- Can language models recognize when text is deliberately ambiguous? Explores whether LLMs can identify and handle multiple valid interpretations in a single phrase, a core human language skill that appears largely absent in current models despite their fluency on standard tasks. (Relation: both cases involve recognizing when a surface-plausible interpretation is wrong.)
- Can models identify what information they actually need? When a reasoning task is missing a key piece of information, can language models recognize what's absent and ask the right clarifying question? QuestBench tests this capability directly. (Relation: an analogous separability claim: problem-solving and information-gathering are distinct, just as induction and exception-handling are distinct.)
Original note title: reasoning models are worse than non-reasoning models at inductive rule inference from exceptions