OptimalThinkingBench: Evaluating Over and Underthinking in LLMs
Thinking LLMs solve complex tasks at the expense of increased compute and overthinking on simpler problems, while non-thinking LLMs are faster and cheaper but underthink on harder reasoning problems. This has led to the development of separate thinking and non-thinking LLM variants, leaving the onus of selecting the optimal model for each query on the end user. In this work, we introduce OptimalThinkingBench, a unified benchmark that jointly evaluates overthinking and underthinking in LLMs and encourages the development of optimally-thinking models that balance performance and efficiency. Our benchmark comprises two sub-benchmarks: OverthinkingBench, featuring simple queries in 72 domains, and UnderthinkingBench, containing 11 challenging reasoning tasks. Using novel thinking-adjusted accuracy metrics, we perform an extensive evaluation of 33 different thinking and non-thinking models and show that no model is able to think optimally on our benchmark. Thinking models often overthink for hundreds of tokens on the simplest user queries without improving performance. In contrast, large non-thinking models “underthink”, often falling short of much smaller thinking models. We further explore several methods to encourage optimal thinking, but find that these approaches often improve on one sub-benchmark at the expense of the other, highlighting the need for better unified and optimal models in the future.
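To give intuition for what a thinking-adjusted accuracy metric might look like, consider one plausible instantiation (a sketch under our own assumptions; the notation and the threshold form are illustrative, not necessarily the benchmark's exact definitions): on the simple queries, a correct answer counts only if it was produced within a thinking-token budget, and the final score averages over budgets,

\[
\mathrm{OAA}@\tau \;=\; \frac{1}{|D|}\sum_{i \in D}\mathbb{1}\big[\hat{y}_i = y_i \;\wedge\; t_i \le \tau\big],
\qquad
\mathrm{AUC}_{\mathrm{OAA}} \;=\; \frac{1}{|T|}\sum_{\tau \in T}\mathrm{OAA}@\tau,
\]

where $\hat{y}_i$ and $y_i$ are the predicted and gold answers for query $i$, $t_i$ is the number of thinking tokens spent on it, $D$ is the set of simple queries, and $T$ is a grid of budgets. Under such a metric, thinking tokens that do not change the answer strictly lower the overthinking score, while the challenging reasoning tasks can be scored with plain accuracy.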
Finally, to improve performance on OptimalThinkingBench, we explore different methods that encourage optimal thinking by either (1) penalizing overthinking with length-based rewards, (2) using a router to switch between thinking and non-thinking modes, or (3) explicitly prompting models to think optimally; a minimal illustrative sketch of (1) and (2) follows the contribution list below. While some of these methods prove more effective than others, a significant gap persists, motivating the need for better optimally-thinking LLMs in the future. In summary, our contributions are threefold:
• We develop OptimalThinkingBench, a single, unified benchmark that simultaneously tracks progress toward optimally-thinking LLMs on both performance and efficiency.
• Through comprehensive evaluations of 33 different thinking and non-thinking LLMs, we show that state-of-the-art models struggle to optimally balance accuracy and efficiency, leaving a large gap for improvement in future work.
• We explore and compare several methods to encourage optimal thinking. Our results show that, while some approaches are promising, a significant trade-off between efficiency and performance remains.
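To make methods (1) and (2) concrete, the following minimal Python sketch shows one way a length-based reward and a thinking/non-thinking router could be implemented. Everything here is an illustrative assumption rather than the exact recipe studied in this work: the function names, the linear per-token penalty, the 512-token budget, and the toy difficulty heuristic are ours.

from typing import Callable


def length_penalized_reward(
    is_correct: bool,
    num_thinking_tokens: int,
    budget: int = 512,
    penalty_per_token: float = 1e-3,
) -> float:
    """Method (1), sketched: reward correctness, but charge for thinking
    tokens spent beyond a budget. On simple queries this discourages
    overthinking; on hard queries the accuracy term should dominate, so
    thinking at length still pays off when it flips a wrong answer to a
    right one."""
    accuracy_reward = 1.0 if is_correct else 0.0
    excess_tokens = max(0, num_thinking_tokens - budget)
    return accuracy_reward - penalty_per_token * excess_tokens


def route(query: str, difficulty_score: Callable[[str], float]) -> str:
    """Method (2), sketched: send hard queries to the thinking mode and
    everything else to the cheaper non-thinking mode. `difficulty_score`
    stands in for any learned or heuristic difficulty estimator."""
    return "thinking" if difficulty_score(query) > 0.5 else "non-thinking"


if __name__ == "__main__":
    # A correct answer that spent 900 thinking tokens against a
    # 512-token budget is still rewarded, but less: ~0.612.
    print(length_penalized_reward(True, 900))
    # Toy heuristic (illustration only): longer queries count as harder.
    print(route("What is 2 + 2?", lambda q: len(q) / 100))  # non-thinking

Even this toy form exposes the trade-off the benchmark measures: the penalty must stay small enough that a wrong-to-right flip on a hard query (worth 1.0) outweighs the token cost of the reasoning that produced it; otherwise the model is pushed toward underthinking.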