Beyond Binary Rewards: Training LMs to Reason About Their Uncertainty

Paper · arXiv 2507.16806 · Published July 22, 2025
Reasoning by Reflection · Reasoning Critiques · Reward Models · Reinforcement Learning

When language models (LMs) are trained via reinforcement learning (RL) to generate natural language “reasoning chains”, their performance improves on a variety of difficult question answering tasks. Today, almost all successful applications of RL for reasoning use binary reward functions that evaluate the correctness of LM outputs. Because such reward functions do not penalize guessing or low-confidence outputs, they often have the unintended side-effect of degrading calibration and increasing the rate at which LMs generate incorrect responses (or “hallucinate”) in other problem domains. This paper describes RLCR (Reinforcement Learning with Calibration Rewards), an approach to training reasoning models that jointly improves accuracy and calibrated confidence estimation. During RLCR, LMs generate both predictions and numerical confidence estimates after reasoning. They are trained to optimize a reward function that augments a binary correctness score with a Brier score—a scoring rule for confidence estimates that incentivizes calibrated prediction. We first prove that this reward function (or any analogous reward function that uses a bounded, proper scoring rule) yields models whose predictions are both accurate and well-calibrated. We next show that across diverse datasets, RLCR substantially improves calibration with no loss in accuracy, on both in-domain and out-of-domain evaluations—outperforming both ordinary RL training and classifiers trained to assign post-hoc confidence scores. While ordinary RL hurts calibration, RLCR improves it. Finally, we demonstrate that verbalized confidence can be leveraged at test time to improve accuracy and calibration via confidence-weighted scaling methods. Our results show that explicitly optimizing for calibration can produce more generally reliable reasoning models.
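As a concrete illustration of the reward described above, the sketch below combines a binary correctness term with a negated Brier penalty on the model's verbalized confidence. The function name, string-matching correctness check, and exact way the two terms are combined are our own assumptions for illustration, not the paper's reference implementation.

```python
def rlcr_reward(answer: str, confidence: float, gold: str) -> float:
    """Hypothetical sketch of an RLCR-style reward.

    Combines a binary correctness score with a Brier-score term on the
    model's verbalized confidence, which is assumed to be the model's
    stated probability (in [0, 1]) that `answer` is correct.
    """
    correct = 1.0 if answer.strip() == gold.strip() else 0.0  # binary correctness reward
    brier_penalty = (confidence - correct) ** 2               # Brier score: 0 when confidence matches the outcome
    return correct - brier_penalty                            # reward = correctness minus Brier penalty


# Example: a correct answer with confidence 0.9 scores 1 - 0.01 = 0.99;
# an incorrect answer with confidence 0.9 scores 0 - 0.81 = -0.81.
print(rlcr_reward("42", 0.9, gold="42"))   # 0.99
print(rlcr_reward("41", 0.9, gold="42"))   # -0.81
```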

RLCR provably incentivizes both accuracy and calibration: R_RLCR is maximized when models output the answer most likely to be correct, along with a calibrated estimate of their probability of success. In other words, R_RLCR is maximized by LM outputs (y, q) for which y maximizes p(y ≡ y*) and q = p(y ≡ y*), where y* denotes the ground-truth answer. We show that such an objective can be constructed whenever a bounded, proper scoring rule is used for the calibration term. Notably, while the ubiquitous log-likelihood loss is itself a proper scoring rule, it is unbounded and therefore lacks this property: its penalty for an uncertain prediction can outweigh the bounded correctness reward, so it can incentivize models to output incorrect answers.
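A small worked example helps show why boundedness matters. The sketch below assumes an additive reward of the form "correctness term plus calibration term" (as in the reward sketched earlier) and compares the expected reward of two candidate answers under a Brier calibration term versus a log-likelihood term; the specific numbers are illustrative, not from the paper.

```python
import math

def expected_reward_brier(p: float, q: float) -> float:
    """Expected correctness-plus-negative-Brier reward when the chosen answer
    is correct with probability p and the model reports confidence q."""
    return p - (q - 1.0) ** 2 * p - (q - 0.0) ** 2 * (1.0 - p)

def expected_reward_logloss(p: float, q: float, eps: float = 1e-9) -> float:
    """Expected correctness-plus-log-likelihood reward (unbounded below)."""
    q = min(max(q, eps), 1.0 - eps)
    return p + p * math.log(q) + (1.0 - p) * math.log(1.0 - q)

# Candidate A: likely correct (p = 0.6); candidate B: certainly wrong (p = 0.0).
# For either rule, the expected-reward-maximizing report is q = p.
print(expected_reward_brier(0.6, 0.6))    # ~0.36  -> Brier rule prefers the likely-correct answer
print(expected_reward_brier(0.0, 0.0))    #  0.0
print(expected_reward_logloss(0.6, 0.6))  # ~-0.07 -> log-loss rule prefers...
print(expected_reward_logloss(0.0, 0.0))  # ~0.0   -> ...the certainly-wrong answer with q near 0
```

Under these assumptions, the Brier penalty is bounded by 1, so the correctness bonus still favors the answer most likely to be correct, whereas the log-loss penalty for hedged confidence (here, q = 0.6) can exceed the correctness reward and tip the optimum toward a confidently wrong answer.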