LLM Reasoning and Architecture · Reinforcement Learning for LLMs

Why do reasoning models fail at exception-based rule inference?

Explores why chain-of-thought models systematically underperform on tasks requiring inductive rule inference from exceptions in game-based settings, despite excelling at normal rule patterns.

Note · 2026-02-22 · sourced from Reasoning Critiques
How should we allocate compute budget at inference time? · What kind of thing is an LLM really?

The standard case for reasoning models holds for deductive and mathematical tasks where explicit step-by-step inference adds genuine value. Inductive reasoning from sparse observations is different — and reasoning models are systematically worse at it.

Across four controlled game-based tasks (chess, Texas Hold'em, dice games, blackjack) with hidden rules, models are given gameplay transcripts without access to the rules and must infer the latent constraints. The pattern is consistent: reasoning models come out worse than non-reasoning models precisely on the exception-based rules.
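A toy illustration of this setup, with an invented dice rule (not one of the benchmark's actual hidden rules): the model only ever sees (state, outcome) transcripts, and a learner that merely extends the dominant pattern misclassifies every round governed by the exception.

```python
# Hypothetical sketch of the evaluation setup: a game with a hidden rule
# containing an exception. The rule and names are illustrative, not taken
# from the benchmark described above.

def hidden_rule(roll_a: int, roll_b: int) -> str:
    """Higher roll wins -- EXCEPT that on doubles, player B wins."""
    if roll_a == roll_b:            # the exception a pattern-extender misses
        return "B"
    return "A" if roll_a > roll_b else "B"

def make_transcript(rounds):
    """Transcripts expose outcomes but never the rule text itself."""
    return [((a, b), hidden_rule(a, b)) for a, b in rounds]

transcript = make_transcript([(6, 2), (3, 5), (4, 4), (6, 6)])

# A learner that only extends the dominant pattern ("higher roll wins")
# gets every exception round wrong:
pattern_only = lambda a, b: "A" if a >= b else "B"
errors = [(s, o) for s, o in transcript if pattern_only(*s) != o]
```

Here `errors` contains exactly the two doubles rounds: the negative evidence lives only in the exceptions, which is what makes the inference hard.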

The gap is not random variation. Reasoning models struggle with exception-based rules because CoT introduces three systematic failure modes, with solving error dominating at over 80% of failures.

The theoretical framework formalizes why: three failure modes propagate through belief updates — (1) incorrect sub-task decomposition (breakdown error: model fixates on irrelevant features), (2) incorrect sub-task solving (solving error: >80% of failures, includes Math Overuse, Overgeneralization, Hallucinated Rules subtypes), and (3) incorrect final answer summarization (summary error: overly long or short chains diverging from optimal depth). The formalization shows that each additional reasoning step is a potential error amplification point when the task requires recognizing pattern exceptions rather than extending patterns.
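The amplification claim can be made concrete with a back-of-the-envelope model (my numbers, not measurements from the paper): if each reasoning step is independently correct with probability p, an n-step chain survives intact with probability p**n, so every added step compounds the risk.

```python
# Minimal sketch of error amplification across a reasoning chain.
# Assumes independent per-step correctness, which is a simplification.

def chain_accuracy(p_step: float, n_steps: int) -> float:
    """Probability that an n-step chain contains no erroneous step."""
    return p_step ** n_steps

# With 95% per-step reliability, a one-step answer is right 95% of the
# time, but a 20-step chain survives intact barely a third of the time.
short = chain_accuracy(0.95, 1)
long = chain_accuracy(0.95, 20)
```

This is why chain length cuts both ways: on deductive tasks each step adds information that outweighs the added error risk, while on exception recognition the extra steps mostly add opportunities to rationalize the existing pattern.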

Why CoT makes this worse: inductive rule inference from exceptions requires recognizing that existing rules don't apply — a form of negative knowledge. CoT pressures models to produce positive reasoning chains. When the correct inference is "I see an exception to the pattern I've been building," CoT instead generates elaborate chains that rationalize the existing pattern around the exception.

This extends "When does explicit reasoning actually help model performance?" by adding a third category: tasks requiring inductive inference from negative evidence. The taxonomy now has three zones:

  1. Logical derivation structure → CoT helps
  2. Continuous nuanced judgment → CoT hurts
  3. Inductive exception inference → CoT hurts (by a different mechanism)

An important nuance from the SolverLearner framework: when inductive reasoning is properly scaffolded and separated from deduction, LLMs achieve near-perfect performance (ACC ≈ 1). The SolverLearner enables models to learn underlying functions (y = f(x)) using only in-context examples, isolating induction from the confounds of mixed-mode reasoning. This means the inductive capacity exists but is fragile — it succeeds when the framework prevents the model from falling into deductive patterns and when exceptions are not present. The failure documented above is specifically in unscaffolded exception recognition, not in inductive reasoning per se. The implication: architectural separation of inductive and deductive reasoning modes could recover the latent inductive capacity that CoT suppresses.
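A rough sketch of the SolverLearner separation as described above: the model's only job is to *induce* candidate functions from in-context examples, while deduction (applying and checking each candidate) is delegated to an external executor, so deductive errors cannot contaminate the induction. The candidate pool and target function below are illustrative stand-ins for LLM output, not the framework's actual interface.

```python
# Hypothetical sketch of induction/deduction separation.

examples = [(1, 3), (2, 5), (5, 11)]        # observations of y = f(x)

# Stand-in for the LLM's induction step: propose candidate functions.
candidates = {
    "double": lambda x: 2 * x,
    "affine": lambda x: 2 * x + 1,
    "square": lambda x: x * x,
}

# External deduction step: keep only candidates consistent with every
# in-context example. No chain-of-thought is involved in this check.
consistent = {
    name: f for name, f in candidates.items()
    if all(f(x) == y for x, y in examples)
}

learned = consistent["affine"]               # the sole survivor here
```

The design point is that the executor, not the model, performs the filtering, which is exactly the "architectural separation of inductive and deductive reasoning modes" the note suggests could recover the suppressed capacity.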


Source: Reasoning Critiques


reasoning models are worse than non-reasoning models at inductive rule inference from exceptions