
Why do preference models favor surface features over substance?

Preference models show systematic bias toward length, structure, jargon, sycophancy, and vagueness—features humans actively dislike. Understanding this 40% divergence reveals whether it stems from training data artifacts or architectural constraints.

Note · 2026-02-23 · sourced from Flaws

Preference models — both reward models and LLM evaluators — consistently favor responses exhibiting five idiosyncratic bias features, even when these features add no substantive value. Using controlled counterfactual pairs that amplify one bias dimension while holding the others constant, model preferences diverge from human preferences on approximately 40% of comparisons.
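The counterfactual-pair protocol can be sketched in a few lines. This is a minimal illustration, not the paper's harness: `model_prefers` and `human_prefers` are assumed stand-ins for the preference model and human annotators, each answering whether the amplified variant beats the base response.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CounterfactualPair:
    """A base response and a variant that amplifies exactly one bias feature."""
    base: str
    amplified: str
    feature: str  # e.g. "length", "sycophancy", "jargon"


def divergence_rate(
    pairs: List[CounterfactualPair],
    model_prefers: Callable[[str, str], bool],
    human_prefers: Callable[[str, str], bool],
) -> float:
    """Fraction of pairs where the model's preference disagrees with the human's.

    Each judge returns True iff it prefers the amplified variant over the base.
    A rate near 0.4 would correspond to the ~40% divergence described above.
    """
    flips = sum(
        1
        for p in pairs
        if model_prefers(p.base, p.amplified) != human_prefers(p.base, p.amplified)
    )
    return flips / len(pairs)
```

Because each pair varies only one feature, a disagreement on that pair can be attributed to that feature rather than to confounds.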

The five dimensions: length, structured formatting, jargon, sycophancy, and vagueness.

The correlation structure reveals the mechanism. Bias features show moderate-to-strong positive correlation with model preference labels (mean r_model = +0.36) but mild negative correlation with human preference labels (mean r_human = -0.12). Models are not just slightly miscalibrated — they are systematically inverted on what these features signal.
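The inverted correlation structure is easy to demonstrate. The sketch below uses toy data (not the study's), computing plain Pearson correlation between a bias-feature score and binary preference labels — for a binary label this is the point-biserial correlation. The sign flip between the model and human columns is the pattern described above.

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation; with a binary label in `ys` this is the
    point-biserial correlation against the feature score in `xs`."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Toy illustration: a surface feature the model rewards but humans
# mildly penalize yields opposite-signed correlations.
feature = [0.1, 0.4, 0.9, 0.7, 0.2, 0.8]   # e.g. normalized length score
model_label = [0, 0, 1, 1, 0, 1]            # model prefers high-feature responses
human_label = [1, 1, 0, 1, 1, 0]            # humans lean the other way
```

Run on this toy data, `pearson_r(feature, model_label)` is strongly positive while `pearson_r(feature, human_label)` is negative, mirroring the r_model = +0.36 vs r_human = -0.12 pattern (the toy magnitudes are exaggerated for clarity).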

Sycophancy divergence is the most extreme: LLM evaluators show 75-85% skew toward sycophantic responses versus ~50% for human annotators. This confirms that the finding of "Can LLM judges be fooled by fake credentials and formatting?" extends beyond judge biases into the reward model layer.

The downstream consequences cascade: reward models incentivize reward hacking toward these proxy features; evaluators distort benchmark conclusions; optimization toward surface properties diverges from human preferences. Counterfactual data augmentation (CDA) using synthesized contrastive examples partially corrects the miscalibration but does not eliminate it.
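The CDA intervention can be sketched as follows. All names here are illustrative assumptions: `amplify` synthesizes a variant of a response with one surface feature exaggerated, and `label_by_substance` scores a response on content alone. Pairing each original with its amplified twin under the same substance label teaches the reward model that the feature carries no signal.

```python
from typing import Callable, List, Tuple


def make_cda_pairs(
    examples: List[Tuple[str, str]],
    amplify: Callable[[str], str],
    label_by_substance: Callable[[str], float],
) -> List[Tuple[str, str, float]]:
    """Counterfactual data augmentation sketch.

    For each (prompt, response) pair, synthesize a variant that amplifies one
    surface feature (e.g. padding, flattery) and give both the SAME label,
    determined by substance. Training on both decorrelates the surface
    feature from the reward signal.
    """
    augmented = []
    for prompt, response in examples:
        variant = amplify(response)
        score = label_by_substance(response)  # substance unchanged by amplification
        augmented.append((prompt, response, score))
        augmented.append((prompt, variant, score))
    return augmented
```

As the note observes, this only partially corrects the miscalibration: the contrastive pairs cover the amplifications that were synthesized, not every surface cue the model may have latched onto.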




Preference model miscalibration across five bias dimensions diverges from human preferences by 40%, driven by training data artifacts.