Why do more capable reasoning models ignore your instructions?
As AI models develop stronger reasoning abilities, they seem to follow instructions less reliably. What causes this counterintuitive trade-off, and how severe is the problem in practice?
Post angle: Medium / LinkedIn
The counterintuitive finding: stronger reasoning models fail more at doing what you ask. Not because they're rebellious or values-misaligned — but because the mechanics of deep reasoning work against instruction retention.
The hook: You upgrade to a more capable AI model. Its math is better. Its answers are more sophisticated. But it keeps ignoring the format you specified. It forgets the constraint you gave it. You have to re-state your instructions in every message. The upgrade made some things better while quietly making this worse.
The mechanism (in simple terms): When a model thinks through a long chain of reasoning, the original instruction appears at the start of the context. The answer appears at the end. As the chain grows, the gap between "what you asked for" and "what the model is currently generating" widens. The instruction gets buried under hundreds of reasoning tokens. The model's attention distributes over everything it has generated — and the original directive gets drowned out.
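A toy sketch makes the dilution concrete. This is not how any real model computes attention; it assumes a single softmax over all context positions with a fixed mild bias toward the instruction tokens, purely to show how the instruction's share of attention mass shrinks as the reasoning chain grows:

```python
import numpy as np

def instruction_attention_share(n_instruction_tokens: int,
                                n_reasoning_tokens: int,
                                instruction_bias: float = 1.0) -> float:
    """Toy model: one softmax over all positions, with a fixed bias toward
    instruction tokens. Returns the total attention mass on the instruction."""
    scores = np.concatenate([
        np.full(n_instruction_tokens, instruction_bias),  # the original prompt
        np.zeros(n_reasoning_tokens),                      # generated reasoning so far
    ])
    weights = np.exp(scores) / np.exp(scores).sum()
    return float(weights[:n_instruction_tokens].sum())

for chain_length in (50, 500, 5000):
    share = instruction_attention_share(30, chain_length)
    print(f"{chain_length:>5} reasoning tokens -> {share:.1%} of attention on the instruction")
```

With 30 instruction tokens and the assumed bias, the instruction holds roughly 60% of the attention mass at 50 reasoning tokens, about 14% at 500, and under 2% at 5,000. Real attention is far more structured than this toy, but the qualitative drift is the point: the constraint competes with everything the model has written since.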
The empirical stakes: The best models achieve only 50.71% strict instruction-following accuracy during mathematical reasoning. Supervised fine-tuning (SFT) and reinforcement learning (RL) for reasoning both degrade instruction adherence. Longer chains make the problem worse. Enforcing brevity improves instruction compliance but costs reasoning depth.
The design implication: For task-critical applications such as agents, customer service, and workflow automation, the answer is not always "use the most capable model." It might be "use the model that actually follows instructions," which may be a less capable one. The optimum for reasoning ability and the optimum for controllability are not the same point.
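One way to act on that is to measure instruction compliance and task accuracy as separate scores when choosing a model, and to gate on compliance before comparing capability. A rough sketch only; `ask_model`, the JSON format check, and the 95% threshold are placeholders for whatever your application actually requires:

```python
import json

def follows_format(response: str) -> bool:
    """Example compliance check: the prompt demanded a JSON object
    containing exactly one key, 'answer'."""
    try:
        return set(json.loads(response)) == {"answer"}
    except (json.JSONDecodeError, TypeError):
        return False

def evaluate(model_name, test_cases, ask_model):
    """Score a candidate model on task accuracy and instruction compliance
    separately, instead of blending them into a single metric."""
    correct = compliant = 0
    for case in test_cases:
        response = ask_model(model_name, case["prompt"])
        compliant += follows_format(response)
        correct += case["check_answer"](response)
    n = len(test_cases)
    return {"accuracy": correct / n, "compliance": compliant / n}

def pick_model(results, min_compliance=0.95):
    """Gate on compliance first, then take the most accurate survivor."""
    eligible = [m for m, r in results.items() if r["compliance"] >= min_compliance]
    return max(eligible, key=lambda m: results[m]["accuracy"]) if eligible else None
```

The point of the gate: for a workflow step that breaks whenever the format is wrong, compliance is a hard constraint, not one term in an average.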
The structural insight: This trade-off is documented across Why do better reasoning models ignore instructions?, Does reasoning fine-tuning make models worse at declining to answer?, and Does supervised fine-tuning actually improve reasoning quality? The recurring pattern: training for one capability degrades another, and the degraded capability is often the one you're taking for granted.
Source: Reasoning Critiques
Related concepts in this collection
- Why do better reasoning models ignore instructions? As models develop stronger reasoning abilities through training, they appear to become worse at following specified constraints. Is this an unavoidable trade-off, and what causes it? (The core finding this post angle develops.)
- Does reasoning fine-tuning make models worse at declining to answer? When models are trained to reason better, do they lose the ability to say 'I don't know'? This matters for high-stakes applications like medical and legal AI that depend on appropriate uncertainty. (The pattern at a different capability dimension.)
- Does preference optimization harm conversational understanding? Exploring whether RLHF training that rewards confident, complete responses undermines the grounding acts (clarifications, checks, acknowledgments) that actually build shared understanding in dialogue. (The RLHF version of the same trade-off.)
Original note title: the more it reasons the less it listens — why scaling reasoning creates instruction-following gaps