Reinforcement Learning for LLMs · LLM Reasoning and Architecture

Does reasoning fine-tuning make models worse at declining to answer?

When models are trained to reason better, do they lose the ability to say 'I don't know'? This matters for high-stakes applications like medical and legal AI that depend on appropriate uncertainty.

Note · 2026-02-21 · sourced from Argumentation

AbstentionBench measures something most reasoning benchmarks ignore: whether models know when not to answer. The finding is that fine-tuning for reasoning performance — the process that produces o1, R1, and similar models — degrades abstention capacity by approximately 24%.
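To make the measurement concrete, here is a minimal sketch of how an AbstentionBench-style metric could be computed: abstention recall on questions labeled unanswerable. The marker list, function names, and data format are illustrative assumptions, not the benchmark's actual protocol (which typically uses an LLM judge rather than string matching).

```python
# Crude abstention detector: real evaluations use an LLM judge;
# this string match is only for illustration.
ABSTAIN_MARKERS = ("i don't know", "i cannot answer", "not enough information")

def is_abstention(response: str) -> bool:
    """Return True if the response looks like a refusal to answer."""
    return any(marker in response.lower() for marker in ABSTAIN_MARKERS)

def abstention_recall(responses, should_abstain):
    """Fraction of unanswerable questions on which the model abstained."""
    hits = sum(
        1 for r, flag in zip(responses, should_abstain)
        if flag and is_abstention(r)
    )
    total = sum(should_abstain)
    return hits / total if total else 0.0

# Example: a reasoning-tuned model answering 3 of 4 unanswerable questions.
responses = [
    "The answer is 42.",
    "I don't know, the question is underspecified.",
    "Based on my reasoning, it must be Paris.",
    "Clearly the result is 7.",
]
should_abstain = [True, True, True, True]
print(abstention_recall(responses, should_abstain))  # 0.25
```

A ~24% degradation would show up here as a drop in this recall score after reasoning fine-tuning, with accuracy on answerable questions unchanged or improved.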

Models optimized for reasoning say "I don't know" less often. They answer when they should decline. They express confidence when uncertainty is appropriate.

This is not a paradox once you understand the training dynamics. Reasoning fine-tuning rewards chains that produce answers. The reward signal is: generate a complete, confident response to the question. This is the right signal for most reasoning tasks. But it systematically punishes abstention — "I don't know" is the one output that terminates the chain without a scorable answer.
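The training dynamic can be sketched with a toy reward function. This grading scheme is an illustrative assumption, not any specific lab's reward model, but it shows the asymmetry: a confident guess has some nonzero chance of matching the gold answer, while abstention scores zero every time.

```python
def reasoning_reward(response: str, gold: str) -> float:
    """Toy answer-graded reward: 1 for a correct final answer, 0 otherwise.

    Abstention can never match the gold answer, so it always scores 0,
    the same as a confidently wrong answer. A guess, by contrast, has a
    nonzero expected reward across the question distribution, so gradient
    updates push probability mass away from abstaining even on questions
    the model cannot actually answer.
    """
    if "i don't know" in response.lower():
        return 0.0  # abstention terminates the chain without a scorable answer
    return 1.0 if gold in response else 0.0

# A plausible-but-wrong guess and an honest abstention get the same reward:
print(reasoning_reward("It must be 1923.", gold="1912"))  # 0.0
print(reasoning_reward("I don't know.", gold="1912"))     # 0.0
# Only a confident correct answer is rewarded:
print(reasoning_reward("It was 1912.", gold="1912"))      # 1.0
```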

The result: models that are better at reasoning when they know the answer and worse at recognizing when they don't.

This adds a dimension to "Does more thinking time actually improve LLM reasoning?". The overthinking finding showed that extended token-level thinking degrades performance beyond a threshold. AbstentionBench shows a different form of the same problem: fine-tuning-level training shifts the model toward answering regardless of whether answering is appropriate. The trade-off operates at two different timescales, inference time and training time, but both involve the same directional failure: more reasoning commitment produces worse calibration.

The deployment implication is severe. Systems that rely on reasoning models to flag uncertainty — medical decision support, legal research, financial analysis — are working with tools that have been specifically optimized to not flag uncertainty. The capability improvement is real; the calibration regression is equally real and mostly invisible in standard benchmarks.

The "Hallucination Tax of Reasoning Fine-Tuning" paper quantifies an even more extreme version: RFT reduces model refusal rates by over 80%, meaning models that previously correctly declined to answer now generate plausible-sounding but fabricated responses. The proposed mitigation — Safety-Utility Mix (SUM) training that blends safety-aligned data into the reasoning fine-tuning process — restores approximately 10% of safety behavior without quality loss. But this partial restoration underscores the severity: reasoning fine-tuning doesn't just reduce abstention by 24% — it can nearly eliminate the refusal mechanism entirely, replacing "I don't know" with confidently wrong answers.
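The SUM idea is, at its core, a data-mixing recipe. Here is a hedged sketch of what blending safety-aligned examples into a reasoning fine-tuning stream could look like; the mix ratio, record format, and function name are illustrative assumptions, not the paper's exact recipe.

```python
import random

def sum_mix(reasoning_data, safety_data, safety_fraction=0.1, seed=0):
    """Return a shuffled training set with roughly `safety_fraction`
    safety-aligned examples relative to the reasoning set.

    The safety examples (abstentions, refusals on unanswerable questions)
    keep "I don't know" reachable under the fine-tuning objective instead
    of letting answer-graded rewards drive its probability to zero.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible mix
    n_safety = max(1, int(len(reasoning_data) * safety_fraction))
    sampled = [rng.choice(safety_data) for _ in range(n_safety)]
    mixed = list(reasoning_data) + sampled
    rng.shuffle(mixed)
    return mixed

# Hypothetical data: 90 reasoning examples plus abstention-labeled items.
reasoning_data = [{"q": f"math-{i}", "a": str(i)} for i in range(90)]
safety_data = [{"q": "When did the meeting end?", "a": "I don't know"}]
mixed = sum_mix(reasoning_data, safety_data, safety_fraction=0.1)
print(len(mixed))  # 99: 90 reasoning + 9 safety examples
```

The design choice worth noting is that the abstention examples are mixed into the same stream rather than trained in a separate phase, so the model never sees a gradient regime in which refusal is uniformly unrewarded.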

The legal overruling benchmark (Domain Specialization source) adds a counterpoint: base models (Gemini-Flash, GPT-4 class) that are not reasoning-optimized show the opposite failure on complex legal tasks: they abstain too much, setting their uncertainty threshold so high that they fail to make correct judgments even when the evidence is available. GPT-5 (a reasoning-optimized model) showed a notably lower abstention rate on the same task, consistent with the under-abstention pattern documented here. Calibration failure appears bidirectional: reasoning-optimized models over-answer; non-reasoning-optimized models over-abstain. See tension: "Does training objective determine which direction models fail at abstention?".


Source: Argumentation; enriched from Domain Specialization, Training Fine Tuning

Original note title: reasoning fine-tuning degrades abstention capacity by 24 percent, revealing a hidden cost of the reasoning-performance trade-off