LLM Reasoning · Architecture · Reinforcement Learning for LLMs

Why do more capable reasoning models ignore your instructions?

As AI models develop stronger reasoning abilities, they seem to follow instructions less reliably. What causes this counterintuitive trade-off, and how severe is the problem in practice?

Note · 2026-02-22 · sourced from Reasoning Critiques

Post angle: Medium / LinkedIn

The counterintuitive finding: stronger reasoning models fail more at doing what you ask. Not because they're rebellious or values-misaligned — but because the mechanics of deep reasoning work against instruction retention.

The hook: You upgrade to a more capable AI model. Its math is better. Its answers are more sophisticated. But it keeps ignoring the format you specified. It forgets the constraint you gave it. You have to re-state your instructions in every message. The upgrade made some things better while quietly making this worse.

The mechanism (in simple terms): When a model thinks through a long chain of reasoning, the original instruction appears at the start of the context. The answer appears at the end. As the chain grows, the gap between "what you asked for" and "what the model is currently generating" widens. The instruction gets buried under hundreds of reasoning tokens. The model's attention distributes over everything it has generated — and the original directive gets drowned out.
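The dilution intuition can be made concrete with a toy calculation. This is an illustrative assumption (real attention is learned and non-uniform, and trained models can re-attend to early tokens), not a claim about any specific architecture: if attention mass spreads roughly evenly across the context, the fraction landing on the instruction shrinks as the reasoning chain grows.

```python
# Toy model of attention dilution. Assumption: attention mass is spread
# roughly uniformly over all tokens in context. Real transformers learn
# non-uniform attention, so this only illustrates the dilution pressure.

def instruction_attention_share(instruction_tokens: int, reasoning_tokens: int) -> float:
    """Fraction of uniform attention that falls on the instruction span."""
    total = instruction_tokens + reasoning_tokens
    return instruction_tokens / total

# A 20-token instruction gets drowned out as the chain of thought grows.
for chain_len in (0, 100, 500, 2000):
    share = instruction_attention_share(20, chain_len)
    print(f"{chain_len:>5} reasoning tokens -> instruction share {share:.3f}")
```

Under this toy model, the instruction's share of attention falls from 100% with no reasoning tokens to under 1% after 2,000 of them, which matches the qualitative claim: the longer the chain, the weaker the pull of the original directive.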

The empirical stakes: even the best models achieve only 50.71% on strict instruction-following during mathematical reasoning. Supervised fine-tuning (SFT) and RL training for reasoning both degrade instruction adherence. Longer chains worsen the problem. Enforcing brevity helps instruction compliance but costs reasoning depth.

The design implication: for task-critical applications — agents, customer service, workflow automation — the answer is not always "use the most capable model." It might be "use the model that actually follows instructions," which may be a less capable one. The optima for "reasoning ability" and "controllability" are not the same point.
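If controllability matters, it is worth measuring directly when comparing models, not inferred from capability benchmarks. A minimal sketch of such a check, assuming a hypothetical format rule ("respond with a single JSON object, nothing else") — the rule and the sample outputs here are made up for illustration:

```python
import re

def follows_format(output: str) -> bool:
    """Strict check: the response must be one JSON object and nothing else.
    This particular rule is an illustrative assumption, not a real benchmark."""
    return bool(re.fullmatch(r"\s*\{.*\}\s*", output, flags=re.DOTALL))

def compliance_rate(outputs: list[str]) -> float:
    """Fraction of model outputs that satisfy the strict format rule."""
    return sum(follows_format(o) for o in outputs) / len(outputs)

# Hypothetical outputs: the second one answers correctly but ignores
# the "JSON only" constraint -- exactly the failure mode described above.
outputs = [
    '{"answer": 42}',
    'Sure! Here is the JSON: {"answer": 42}',
]
print(compliance_rate(outputs))  # 0.5
```

Running a check like this per candidate model turns "follows instructions" from a vibe into a number you can weigh against raw capability.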

The structural insight: This trade-off is documented in Why do better reasoning models ignore instructions?, Does reasoning fine-tuning make models worse at declining to answer?, and Does supervised fine-tuning actually improve reasoning quality? — a recurring pattern: training for one capability degrades another, and the degraded capability is often the one you're taking for granted.




The more it reasons, the less it listens: why scaling reasoning creates instruction-following gaps