Reinforcement Learning for LLMs · Language Understanding and Pragmatics · LLM Reasoning and Architecture

Why do reasoning models fail at predicting disagreement?

RLVR models optimize for single correct answers, but many real tasks involve legitimate disagreement among annotators. Does this optimization fundamentally suppress the model's ability to capture when humans reasonably disagree?

Note · 2026-02-22 · sourced from RLVR
How should researchers navigate LLM reasoning research? Do reasoning traces show how models actually think? What does reward learning actually do to model reasoning?

RLVR training optimizes for tasks with single correct answers — math solutions, code outputs, deterministic verifications. This optimization has a side effect: RLVR-trained models become significantly worse at predicting the distribution of human annotation disagreements, particularly when annotation variance is high. The models get better at the deterministic goal (predicting the majority annotation) but worse at the probabilistic goal (predicting what proportion of annotators disagree).
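
A minimal sketch (not from the source) of the two evaluation targets distinguished above: `majority_accuracy` scores the deterministic goal, while `mean_total_variation` scores the probabilistic one against the empirical annotator distribution. All names and counts are hypothetical.

```python
import numpy as np

def majority_accuracy(pred_dist: np.ndarray, human_counts: np.ndarray) -> float:
    """Deterministic goal: does the model's top label match the majority annotation?"""
    return float(np.mean(pred_dist.argmax(axis=1) == human_counts.argmax(axis=1)))

def mean_total_variation(pred_dist: np.ndarray, human_counts: np.ndarray) -> float:
    """Probabilistic goal: average distance between the predicted label distribution
    and the empirical annotator distribution (lower is better)."""
    human_dist = human_counts / human_counts.sum(axis=1, keepdims=True)
    return float(np.mean(0.5 * np.abs(pred_dist - human_dist).sum(axis=1)))

# Toy data: 3 items, 2 labels, 10 annotators per item (made-up counts).
human_counts = np.array([[6, 4], [9, 1], [5, 5]], dtype=float)
collapsed = np.array([[0.99, 0.01], [0.98, 0.02], [0.97, 0.03]])  # confident, mode-seeking
soft      = np.array([[0.62, 0.38], [0.85, 0.15], [0.55, 0.45]])  # tracks the annotator spread

for name, preds in [("collapsed", collapsed), ("soft", soft)]:
    print(name, majority_accuracy(preds, human_counts), mean_total_variation(preds, human_counts))
# Both models score 1.0 on majority accuracy; only total variation separates them.
```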

The contrast with RLHF models is revealing. For RLHF-trained models, Chain-of-Thought reasoning significantly improves disagreement prediction. For RLVR models, forcing additional reasoning effort does not improve — and can worsen — disagreement prediction. The reasoning pathways that RLVR develops are optimized for convergence toward a single answer, not for representing the legitimate spread of human interpretations.

This connects to a broader pattern: as "Why do readers interpret the same sentence so differently?" argues, a single sentence can support multiple legitimate readings, and tasks that require capturing that multiplicity are structurally mismatched with RLVR's optimization signal. The verifiable reward framework assumes one right answer exists. Many real-world annotation tasks involve multiple valid perspectives — precisely the scenario where RLVR models fail.

As "Do standard NLP benchmarks hide LLM ambiguity failures?" shows, majority-label evaluation conceals this degradation. A model that perfectly predicts the majority vote may be useless at capturing the 40% of annotators who disagree — and that disagreement often carries the most informative signal about task subjectivity and sample ambiguity.
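
One way to make the concealed signal concrete: the entropy of the annotator distribution is a simple ambiguity score that majority-label evaluation discards entirely. A hedged sketch with made-up counts (`annotator_entropy` is a hypothetical helper, not from the source):

```python
import numpy as np

def annotator_entropy(counts: np.ndarray) -> np.ndarray:
    """Per-item entropy (bits) of the annotation distribution: a rough
    ambiguity signal that majority-vote evaluation throws away."""
    p = counts / counts.sum(axis=1, keepdims=True)
    return -np.where(p > 0, p * np.log2(p), 0.0).sum(axis=1)

# A 6-vs-4 split and a 10-vs-0 split share the same majority label
# but differ sharply in ambiguity.
counts = np.array([[6, 4], [10, 0]], dtype=float)
print(annotator_entropy(counts))  # ~[0.97, 0.0]
```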

The pattern connects to a broader optimization cost: as "Does preference optimization harm conversational understanding?" argues, both RLVR and RLHF exhibit the same narrowing dynamic through different mechanisms. RLHF optimizes for single-turn helpfulness, eroding conversational grounding acts; RLVR optimizes for deterministic correctness, eroding sensitivity to legitimate interpretive variance. Both sacrifice multiplicity for confidence.

As "Does binary reward training hurt model calibration?" argues, RLVR's disagreement degradation is a specific case of binary reward's calibration failure. A binary correct/incorrect signal cannot represent the distribution of human disagreement — it structurally encodes the assumption that exactly one answer exists. The calibration fix (adding a proper scoring rule) addresses confidence-accuracy alignment but not the deeper problem of variance suppression in inherently multi-answer tasks.
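
To make that structural difference concrete, here is a hedged sketch (not the paper's setup) contrasting a binary verifiable-reward signal with a Brier-style proper scoring rule computed against the empirical annotator distribution. The function names and the 60/40 split are illustrative assumptions.

```python
import numpy as np

def binary_reward(pred_dist: np.ndarray, gold_label: int) -> float:
    """Verifiable-reward style signal: 1 if the argmax matches the single
    designated 'correct' label, else 0. The annotator spread never enters."""
    return float(pred_dist.argmax() == gold_label)

def brier_style_score(pred_dist: np.ndarray, human_dist: np.ndarray) -> float:
    """Squared-error (Brier-style) score against the empirical annotator
    distribution; lower is better, and it rewards matching the spread."""
    return float(np.sum((pred_dist - human_dist) ** 2))

human_dist = np.array([0.6, 0.4])        # hypothetical 60/40 annotator split
collapsed  = np.array([0.99, 0.01])      # mode-seeking prediction
matched    = np.array([0.60, 0.40])      # distribution-matching prediction

for p in (collapsed, matched):
    print(binary_reward(p, gold_label=0), brier_style_score(p, human_dist))
# Binary reward cannot distinguish the two predictions; the scoring rule can.
```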

The practical implication for using LLM annotators: RLVR models may be actively worse than non-reasoning models for subjective annotation tasks. The reasoning that helps with math actively hurts with ambiguity.


Source: RLVR

Original note title: rlvr reasoning models degrade at predicting human annotation disagreements — optimization for deterministic answers suppresses sensitivity to legitimate variance