Reinforcement Learning for LLMs

Can model confidence alone replace external answer verification?

Can LLMs use their own certainty signals instead of external verifiers to improve reasoning? This matters for scaling beyond domains where correct answers can be automatically checked.

Note · 2026-02-22 · sourced from RLVR
How do domain training techniques actually reshape model behavior? How should researchers navigate LLM reasoning research? What does reward learning actually do to model reasoning?

RLVR's reliance on domain-specific verifiers confines it to math and code. Two complementary approaches extend RLVR to general domains by replacing external verification with intrinsic signals.

RLPR (Reinforcement Learning with Reference Probability Reward) uses the LLM's own token probability of generating a reference answer as the reward signal: the higher the probability the model assigns to the correct answer after its reasoning, the better that reasoning presumably was. Two key innovations: (1) a probability-based reward computed from the average decoding probabilities of the reference-answer tokens, which is more robust than the naive sequence likelihood, and (2) stabilization methods that address the high variance inherent in probability-based rewards. RLPR consistently improves reasoning across Gemma, Llama, and Qwen models on both general-domain and mathematical benchmarks.
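A minimal sketch of why averaging beats the naive sequence likelihood, assuming we already have the model's per-token probabilities for the reference-answer tokens (the function names and toy numbers are illustrative, not RLPR's actual implementation):

```python
def rlpr_reward(ref_token_probs):
    """Probability-based reward (sketch): mean per-token probability
    the model assigns to the reference-answer tokens."""
    return sum(ref_token_probs) / len(ref_token_probs)

def sequence_likelihood(ref_token_probs):
    """Naive alternative: product of per-token probabilities.
    A single low-probability token collapses the whole reward."""
    p = 1.0
    for prob in ref_token_probs:
        p *= prob
    return p

# Toy example: one hard token (0.05) among otherwise confident tokens.
probs = [0.9, 0.8, 0.05, 0.9]
print(rlpr_reward(probs))         # 0.6625 — degrades gracefully
print(sequence_likelihood(probs)) # 0.0324 — dominated by one token
```

The averaged reward still distinguishes mostly-correct reasoning from entirely wrong reasoning, which is part of why it is more robust as a training signal.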

INTUITOR goes further: it uses the model's own confidence — self-certainty measured as average KL divergence between the output distribution and a uniform distribution — as its sole reward signal. No reference answers, no external verifiers, no labeled data. The approach is simple: replace the verifiable reward in GRPO with self-certainty scores. The mechanism builds on the observation that LLMs exhibit lower confidence on difficult problems; optimizing for confidence should drive the model toward more reliable reasoning.
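A toy sketch of the self-certainty score and its substitution into GRPO-style group-normalized advantages, assuming explicit per-token probability distributions over a small vocabulary (the function names and 4-token vocabulary are illustrative, not INTUITOR's code):

```python
import math

def self_certainty(token_dists):
    """Average KL(p || U) between each output-token distribution and
    the uniform distribution over the vocabulary. A uniform (maximally
    uncertain) distribution scores 0; a peaked one scores higher."""
    vocab_size = len(token_dists[0])
    kls = []
    for p in token_dists:
        # KL(p || U) = sum_i p_i * log(p_i * V), skipping zero-prob terms
        kls.append(sum(pi * math.log(pi * vocab_size) for pi in p if pi > 0))
    return sum(kls) / len(kls)

def grpo_advantages(rewards):
    """GRPO-style group-relative advantages: z-score the rewards of a
    group of sampled completions for the same prompt. INTUITOR simply
    feeds self-certainty scores in place of verifier rewards here."""
    mu = sum(rewards) / len(rewards)
    sd = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (sd + 1e-8) for r in rewards]

uniform = [[0.25, 0.25, 0.25, 0.25]]          # no confidence
peaked = [[0.97, 0.01, 0.01, 0.01]]           # high confidence
print(self_certainty(uniform))                 # 0.0
print(self_certainty(peaked) > 0)              # True
```

The key design point is that `grpo_advantages` is agnostic to where the rewards come from, which is what makes the verifier-to-self-certainty swap a one-line change in the training loop.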

Both approaches raise the same fundamental question for future AI: as models develop capabilities beyond human evaluation, self-generated signals may be the only viable training pathway. Together with the earlier note "Can model confidence work as a reward signal for reasoning?", this is convergent evidence that intrinsic confidence signals can serve dual roles, improving both performance and reliability.

Following the earlier note "Can reasoning RL work without verifying generated answers?", RLPR and INTUITOR represent the next step: progressively weaker assumptions about what external signal is needed, from reference verification to reference probability to pure self-certainty.


