AbstentionBench: Reasoning LLMs Fail on Unanswerable Questions

Paper · arXiv 2506.09038 · Published June 10, 2025
Argumentation Flaws · Reasoning Critiques · Alignment

For Large Language Models (LLMs) to be reliably deployed in both everyday and high-stakes domains, knowing when not to answer is as critical as answering correctly. Real-world user queries, which can be underspecified, ill-posed, or fundamentally unanswerable, require LLMs to reason about uncertainty and selectively abstain—i.e., refuse to answer definitively. However, abstention remains understudied, lacking a systematic evaluation framework for modern LLMs. In this work, we introduce AbstentionBench: a large-scale benchmark for holistically evaluating abstention across 20 diverse datasets, including questions with unknown answers, underspecification, false premises, subjective interpretations, and outdated information. Evaluating 20 frontier LLMs reveals that abstention is an unsolved problem, and one where scaling models is of little use. While recent reasoning LLMs have shown impressive results in complex problem solving, surprisingly, we find that reasoning fine-tuning degrades abstention (by 24% on average), even for the math and science domains on which reasoning models are explicitly trained. We find that while a carefully crafted system prompt can boost abstention in practice, it does not resolve models’ fundamental inability to reason about uncertainty. We release AbstentionBench to foster research into advancing LLM reliability.
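To make the evaluation concrete, here is a minimal sketch of how abstention could be scored over such a benchmark. The keyword heuristic and the names `is_abstention` and `abstention_recall` are illustrative assumptions, not AbstentionBench’s actual scoring method, which would plausibly rely on a stronger classifier such as an LLM judge.

```python
# Minimal sketch of scoring abstention over a benchmark (hypothetical API).
# The keyword heuristic below is purely for illustration; a real evaluation
# would use a stronger classifier (e.g., an LLM judge).

ABSTENTION_MARKERS = (
    "i don't know", "cannot answer", "unanswerable", "not enough information",
    "depends on", "need more details", "unable to determine",
)

def is_abstention(response: str) -> bool:
    """Label a response as an abstention if it signals uncertainty or refusal."""
    text = response.lower()
    return any(marker in text for marker in ABSTENTION_MARKERS)

def abstention_recall(responses: list[str], should_abstain: list[bool]) -> float:
    """Fraction of questions that warrant abstention where the model abstained."""
    hits = sum(
        is_abstention(r) for r, gold in zip(responses, should_abstain) if gold
    )
    total = sum(should_abstain)
    return hits / total if total else 0.0
```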

There will always be cases where a reliable response is impossible: models must not only answer with high accuracy, but also know when not to answer. For example, the answer to the query “My dog was prescribed 5mg/kg Prednisone, how much should I give her?” depends on the specific dog’s weight, which is left unspecified here. Reliable models must recognize such uncertainty and abstain—i.e., avoid providing a definitive answer—instead expressing uncertainty, asking for clarification, or simply responding “I don’t know”.
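As a toy illustration of why this query is unanswerable as posed, the sketch below (all names hypothetical) computes the weight-based dose only when the weight is supplied, and abstains otherwise.

```python
# Toy illustration of the Prednisone example: a weight-based dose (5 mg/kg)
# is undefined when the weight is unspecified, so the only safe output is to
# abstain and ask for the missing information. All names are illustrative.

from typing import Optional

DOSE_MG_PER_KG = 5.0

def prednisone_dose(weight_kg: Optional[float]) -> str:
    if weight_kg is None:
        # The query is underspecified: abstain rather than guess.
        return "I can't determine the dose without the dog's weight."
    return f"{DOSE_MG_PER_KG * weight_kg:.1f} mg"

print(prednisone_dose(None))   # abstains: weight left unspecified
print(prednisone_dose(10.0))   # "50.0 mg" once the weight is known
```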

Yet, the effects of reasoning beyond correctness are not well understood, particularly for reasoning about uncertainty [41]. Here, we take a step towards understanding the effect of reasoning fine-tuning on handling uncertainty.

Reasoning fine-tuning has improved LLMs’ capabilities, especially in math, coding, and science. However, it is unclear whether these advances generalize to reasoning about evidence and uncertainty, or to identifying unanswerable questions.

Reasoning models struggle to abstain, even in math and science domains. Comparing reasoning fine-tuned models against the instruction-tuned models they are built on, Fig. 6a (left) shows that reasoning models exhibit worse abstention performance than their non-reasoning counterparts.
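The average degradation figure reported above is computed over such model pairs; a minimal sketch of how a pairwise comparison like this could be tallied is shown below. The scores and model names are invented for illustration and are not the paper’s measurements.

```python
# Sketch of the pairwise comparison behind the "reasoning fine-tuning degrades
# abstention" claim: for each (instruction-tuned, reasoning-tuned) model pair,
# compute the relative change in abstention score. All values are illustrative.

def relative_change(instruct_score: float, reasoning_score: float) -> float:
    """Signed relative change; negative means reasoning fine-tuning degraded abstention."""
    return (reasoning_score - instruct_score) / instruct_score

pairs = {
    "model-A": (0.62, 0.45),  # (instruction-tuned, reasoning-tuned) abstention scores
    "model-B": (0.55, 0.44),
}
changes = [relative_change(i, r) for i, r in pairs.values()]
print(f"mean relative change: {sum(changes) / len(changes):+.1%}")
```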

Inspecting the reasoning traces, we see that they do contain increased expressions of uncertainty; despite this, models continue to provide a definitive final response.
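One simple way this trace-versus-answer dissociation could be quantified is by counting hedging expressions separately in the reasoning trace and in the final response. The sketch below assumes, hypothetically, that traces are delimited by `<think>...</think>` tags; the hedge list is illustrative, not the paper’s methodology.

```python
# Sketch of the dissociation described above: count hedging expressions
# separately in the reasoning trace and the final answer.
# Assumes (hypothetically) that traces are wrapped in <think>...</think> tags.

import re

HEDGES = re.compile(r"\b(maybe|unclear|not sure|uncertain|could be|ambiguous)\b", re.I)

def hedge_counts(output: str) -> tuple[int, int]:
    """Return (hedges in reasoning trace, hedges in final answer)."""
    match = re.search(r"<think>(.*?)</think>(.*)", output, re.S)
    trace, answer = (match.group(1), match.group(2)) if match else ("", output)
    return len(HEDGES.findall(trace)), len(HEDGES.findall(answer))

out = "<think>The dog's weight is unclear, so the dose could be anything.</think> Give 50 mg."
print(hedge_counts(out))  # (2, 0): hedging in the trace, a definitive final answer
```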